Test Report: Docker_Linux_crio_arm64 21683

cf2611189ddf0f856b4ad9653dc441b770ddd00e:2025-10-02:41739

Failed tests (43/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.5
35 TestAddons/parallel/Registry 15.17
36 TestAddons/parallel/RegistryCreds 0.5
37 TestAddons/parallel/Ingress 484.36
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.35
41 TestAddons/parallel/CSI 371.6
42 TestAddons/parallel/Headlamp 3
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 303.29
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.26
52 TestForceSystemdFlag 516.02
53 TestForceSystemdEnv 513.86
91 TestFunctional/parallel/DashboardCmd 302.51
98 TestFunctional/parallel/ServiceCmdConnect 603.52
100 TestFunctional/parallel/PersistentVolumeClaim 249.44
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 241
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 93.25
125 TestFunctional/parallel/ServiceCmd/DeployApp 601.3
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
135 TestFunctional/parallel/ServiceCmd/Format 0.39
136 TestFunctional/parallel/ServiceCmd/URL 0.39
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.91
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
190 TestJSONOutput/pause/Command 2.35
196 TestJSONOutput/unpause/Command 1.99
247 TestPreload 443.37
280 TestPause/serial/Pause 7.12
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.42
304 TestStartStop/group/old-k8s-version/serial/Pause 7.53
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.63
316 TestStartStop/group/no-preload/serial/Pause 7.24
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.36
326 TestStartStop/group/embed-certs/serial/Pause 6.76
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 4
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.18
342 TestStartStop/group/newest-cni/serial/Pause 5.99
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.76
TestAddons/serial/Volcano (0.5s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable volcano --alsologtostderr -v=1: exit status 11 (501.792168ms)

-- stdout --


-- /stdout --
** stderr ** 
	I1002 20:21:33.663946 1000574 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:21:33.664831 1000574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:21:33.664872 1000574 out.go:374] Setting ErrFile to fd 2...
	I1002 20:21:33.664890 1000574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:21:33.665373 1000574 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:21:33.665727 1000574 mustload.go:65] Loading cluster: addons-693704
	I1002 20:21:33.666196 1000574 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:21:33.666239 1000574 addons.go:606] checking whether the cluster is paused
	I1002 20:21:33.666389 1000574 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:21:33.666428 1000574 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:21:33.666927 1000574 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:21:33.708848 1000574 ssh_runner.go:195] Run: systemctl --version
	I1002 20:21:33.708923 1000574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:21:33.726203 1000574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:21:33.832717 1000574 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:21:33.832809 1000574 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:21:33.867530 1000574 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:21:33.867556 1000574 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:21:33.867561 1000574 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:21:33.867570 1000574 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:21:33.867574 1000574 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:21:33.867577 1000574 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:21:33.867580 1000574 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:21:33.867583 1000574 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:21:33.867586 1000574 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:21:33.867592 1000574 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:21:33.867595 1000574 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:21:33.867598 1000574 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:21:33.867602 1000574 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:21:33.867605 1000574 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:21:33.867608 1000574 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:21:33.867613 1000574 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:21:33.867616 1000574 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:21:33.867620 1000574 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:21:33.867623 1000574 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:21:33.867626 1000574 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:21:33.867631 1000574 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:21:33.867634 1000574 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:21:33.867636 1000574 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:21:33.867639 1000574 cri.go:89] found id: ""
	I1002 20:21:33.867696 1000574 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:21:33.883839 1000574 out.go:203] 
	W1002 20:21:33.886883 1000574 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:21:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:21:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:21:33.886913 1000574 out.go:285] * 
	* 
	W1002 20:21:34.042300 1000574 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:21:34.045289 1000574 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.50s)
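
Every MK_ADDON_DISABLE_PAUSED failure in this report shares the trace above: the addon-disable path probes for paused containers by shelling out to runc, and /run/runc does not exist on this crio node. A minimal reproduction sketch, assuming the addons-693704 profile is still up; both node-side commands are taken verbatim from the stderr trace, with minikube ssh standing in for the harness's internal ssh_runner:

	# the CRI listing step succeeds
	minikube -p addons-693704 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the follow-up paused-state probe is what exits non-zero: open /run/runc: no such file or directory
	minikube -p addons-693704 ssh -- sudo runc list -f json
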
TestAddons/parallel/Registry (15.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.823793ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003655267s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003016545s
addons_test.go:392: (dbg) Run:  kubectl --context addons-693704 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-693704 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-693704 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.262150503s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 ip
2025/10/02 20:21:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable registry --alsologtostderr -v=1: exit status 11 (651.872534ms)

-- stdout --


-- /stdout --
** stderr ** 
	I1002 20:22:00.064390 1001058 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:22:00.098234 1001058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:00.098592 1001058 out.go:374] Setting ErrFile to fd 2...
	I1002 20:22:00.098609 1001058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:00.098959 1001058 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:22:00.099361 1001058 mustload.go:65] Loading cluster: addons-693704
	I1002 20:22:00.099791 1001058 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:22:00.099804 1001058 addons.go:606] checking whether the cluster is paused
	I1002 20:22:00.099915 1001058 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:22:00.099934 1001058 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:22:00.100466 1001058 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:22:00.189052 1001058 ssh_runner.go:195] Run: systemctl --version
	I1002 20:22:00.189130 1001058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:22:00.291657 1001058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:22:00.498868 1001058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:22:00.499011 1001058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:22:00.571997 1001058 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:22:00.572069 1001058 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:22:00.572094 1001058 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:22:00.572118 1001058 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:22:00.572153 1001058 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:22:00.572187 1001058 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:22:00.572207 1001058 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:22:00.572235 1001058 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:22:00.572272 1001058 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:22:00.572310 1001058 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:22:00.572341 1001058 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:22:00.572383 1001058 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:22:00.572414 1001058 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:22:00.572436 1001058 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:22:00.572462 1001058 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:22:00.572498 1001058 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:22:00.572539 1001058 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:22:00.572565 1001058 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:22:00.572588 1001058 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:22:00.572619 1001058 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:22:00.572650 1001058 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:22:00.572670 1001058 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:22:00.572692 1001058 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:22:00.572724 1001058 cri.go:89] found id: ""
	I1002 20:22:00.572869 1001058 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:22:00.590574 1001058 out.go:203] 
	W1002 20:22:00.593419 1001058 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:22:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:22:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:22:00.593478 1001058 out.go:285] * 
	* 
	W1002 20:22:00.602732 1001058 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:22:00.605969 1001058 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.17s)
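
Note that the registry checks themselves passed; only the disable step tripped the same runc probe. They can be re-run by hand with the commands the test issues, as a sketch (curl here stands in for the harness's debug HTTP GET against the node IP on port 5000):

	kubectl --context addons-693704 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -sI "http://$(out/minikube-linux-arm64 -p addons-693704 ip):5000"
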
TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.336402ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-693704
addons_test.go:332: (dbg) Run:  kubectl --context addons-693704 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (283.259268ms)

-- stdout --


-- /stdout --
** stderr ** 
	I1002 20:28:03.596381 1005739 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:03.597503 1005739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:03.597543 1005739 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:03.597564 1005739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:03.597870 1005739 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:28:03.598267 1005739 mustload.go:65] Loading cluster: addons-693704
	I1002 20:28:03.598655 1005739 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:28:03.598686 1005739 addons.go:606] checking whether the cluster is paused
	I1002 20:28:03.598812 1005739 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:28:03.598844 1005739 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:28:03.599323 1005739 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:28:03.617047 1005739 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:03.617110 1005739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:28:03.639725 1005739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:28:03.736273 1005739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:28:03.736356 1005739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:28:03.772665 1005739 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:28:03.772694 1005739 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:28:03.772699 1005739 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:28:03.772703 1005739 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:28:03.772707 1005739 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:28:03.772710 1005739 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:28:03.772713 1005739 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:28:03.772716 1005739 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:28:03.772720 1005739 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:28:03.772726 1005739 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:28:03.772730 1005739 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:28:03.772746 1005739 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:28:03.772749 1005739 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:28:03.772752 1005739 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:28:03.772754 1005739 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:28:03.772759 1005739 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:28:03.772762 1005739 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:28:03.772766 1005739 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:28:03.772769 1005739 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:28:03.772772 1005739 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:28:03.772777 1005739 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:28:03.772780 1005739 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:28:03.772783 1005739 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:28:03.772786 1005739 cri.go:89] found id: ""
	I1002 20:28:03.772834 1005739 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:28:03.787239 1005739 out.go:203] 
	W1002 20:28:03.790105 1005739 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:28:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:28:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:28:03.790130 1005739 out.go:285] * 
	* 
	W1002 20:28:03.797816 1005739 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:28:03.800659 1005739 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)
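
As with the registry test, the configure and secret-verification steps completed and only the disable step failed. Repeating them by hand is straightforward, assuming a repository checkout with the testdata directory (both commands are verbatim from the trace above):

	out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-693704
	kubectl --context addons-693704 -n kube-system get secret -o yaml
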
TestAddons/parallel/Ingress (484.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-693704 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-693704 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-693704 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [56f5bc51-854e-47f6-a9a2-ee03227a1b18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-693704 -n addons-693704
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-02 20:35:30.13707143 +0000 UTC m=+1046.237007919
addons_test.go:252: (dbg) Run:  kubectl --context addons-693704 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-693704 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-693704/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:27:29 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqkzj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rqkzj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m1s                   default-scheduler  Successfully assigned default/nginx to addons-693704
  Warning  Failed     3m13s (x2 over 6m51s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m13s (x2 over 6m51s)  kubelet            Error: ErrImagePull
  Normal   BackOff    2m59s (x2 over 6m50s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2m59s (x2 over 6m50s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    2m47s (x3 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-693704 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-693704 logs nginx -n default: exit status 1 (108.857018ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-693704 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
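
The pod events point at Docker Hub's unauthenticated pull rate limit rather than at the ingress addon itself. One way to confirm from the node, as a sketch: crictl pull is the node-side counterpart of the kubelet's image pull and should reproduce the toomanyrequests error while the limit is in effect.

	minikube -p addons-693704 ssh -- sudo crictl pull docker.io/library/nginx:alpine
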
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-693704
helpers_test.go:243: (dbg) docker inspect addons-693704:

-- stdout --
	[
	    {
	        "Id": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	        "Created": "2025-10-02T20:19:07.144298893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995109,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:19:07.216699876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hostname",
	        "HostsPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hosts",
	        "LogPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277-json.log",
	        "Name": "/addons-693704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-693704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-693704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	                "LowerDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-693704",
	                "Source": "/var/lib/docker/volumes/addons-693704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-693704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-693704",
	                "name.minikube.sigs.k8s.io": "addons-693704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab8175306a77dcd2868d77b0652aff78896362c7258aefc47fe7a07059e18c86",
	            "SandboxKey": "/var/run/docker/netns/ab8175306a77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-693704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:98:f0:2f:5f:be",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2b7a73ec267c22f9c2a0b05d90a02bfb26f74cfccf22ef9af628da6d1b040f0",
	                    "EndpointID": "a29bf68bc8126d88282105e99c5ad7822f95d3abd8c683fc3272ac8e0ad9c3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-693704",
	                        "d39c48e99245"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
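
The stderr traces above resolve the node's SSH endpoint (127.0.0.1:33900) from the port mappings shown in this inspect output; the shell-quoted form of that same lookup, using the exact template from the traces, is:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-693704
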
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-693704 -n addons-693704
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-693704 logs -n 25: (1.339364921s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p download-docker-496636 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p download-docker-496636                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p binary-mirror-261948 --alsologtostderr --binary-mirror http://127.0.0.1:38235 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p binary-mirror-261948                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ addons  │ disable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p addons-693704 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ ip      │ addons-693704 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-693704 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ addons  │ addons-693704 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                           │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │ 02 Oct 25 20:28 UTC │
	│ addons  │ addons-693704 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:42.587429  994709 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:42.587660  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.587694  994709 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:42.587713  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.588005  994709 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:18:42.588496  994709 out.go:368] Setting JSON to false
	I1002 20:18:42.589377  994709 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18060,"bootTime":1759418263,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:18:42.589480  994709 start.go:140] virtualization:  
	I1002 20:18:42.592863  994709 out.go:179] * [addons-693704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:18:42.596651  994709 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:42.596802  994709 notify.go:221] Checking for updates...
	I1002 20:18:42.602490  994709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:42.605403  994709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:18:42.608387  994709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:18:42.611210  994709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:18:42.614017  994709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:42.617196  994709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:42.641430  994709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:18:42.641548  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.702297  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.693145863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.702404  994709 docker.go:319] overlay module found
	I1002 20:18:42.705389  994709 out.go:179] * Using the docker driver based on user configuration
	I1002 20:18:42.708231  994709 start.go:306] selected driver: docker
	I1002 20:18:42.708247  994709 start.go:936] validating driver "docker" against <nil>
	I1002 20:18:42.708259  994709 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:42.708953  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.762696  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.753788413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.762850  994709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:42.763087  994709 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:42.766074  994709 out.go:179] * Using Docker driver with root privileges
	I1002 20:18:42.768763  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:18:42.768836  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:42.768849  994709 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:42.768919  994709 start.go:350] cluster config:
	{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:42.771909  994709 out.go:179] * Starting "addons-693704" primary control-plane node in "addons-693704" cluster
	I1002 20:18:42.774712  994709 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:42.777590  994709 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:42.780428  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:42.780455  994709 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:42.780491  994709 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:42.780500  994709 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:42.780575  994709 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 20:18:42.780584  994709 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:42.780914  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:18:42.780943  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json: {Name:mkd60ee77440eccb122eacb378637e77c2fde5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:42.795665  994709 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:42.795798  994709 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:18:42.795824  994709 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:18:42.795836  994709 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:18:42.795846  994709 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:18:42.795852  994709 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:19:00.985065  994709 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:19:00.985108  994709 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:19:00.985137  994709 start.go:361] acquireMachinesLock for addons-693704: {Name:mkeb9eb5752430ab2d33310b44640ce93b8d2df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:19:00.985263  994709 start.go:365] duration metric: took 102.298µs to acquireMachinesLock for "addons-693704"
	I1002 20:19:00.985295  994709 start.go:94] Provisioning new machine with config: &{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:00.985372  994709 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:19:00.988832  994709 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:19:00.989104  994709 start.go:160] libmachine.API.Create for "addons-693704" (driver="docker")
	I1002 20:19:00.989159  994709 client.go:168] LocalClient.Create starting
	I1002 20:19:00.989296  994709 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 20:19:01.433837  994709 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 20:19:01.564238  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:19:01.580044  994709 cli_runner.go:211] docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:19:01.580136  994709 network_create.go:284] running [docker network inspect addons-693704] to gather additional debugging logs...
	I1002 20:19:01.580158  994709 cli_runner.go:164] Run: docker network inspect addons-693704
	W1002 20:19:01.596534  994709 cli_runner.go:211] docker network inspect addons-693704 returned with exit code 1
	I1002 20:19:01.596569  994709 network_create.go:287] error running [docker network inspect addons-693704]: docker network inspect addons-693704: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-693704 not found
	I1002 20:19:01.596590  994709 network_create.go:289] output of [docker network inspect addons-693704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-693704 not found
	
	** /stderr **
	I1002 20:19:01.596688  994709 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:01.612608  994709 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f17c0}
	I1002 20:19:01.612647  994709 network_create.go:124] attempt to create docker network addons-693704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:19:01.612711  994709 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-693704 addons-693704
	I1002 20:19:01.677264  994709 network_create.go:108] docker network addons-693704 192.168.49.0/24 created
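A quick way to confirm the subnet and gateway that were just created (a minimal sketch; assumes the docker CLI on the same host and uses the network name from the log above):

	docker network inspect addons-693704 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected, per the log: 192.168.49.0/24 192.168.49.1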
	I1002 20:19:01.677303  994709 kic.go:121] calculated static IP "192.168.49.2" for the "addons-693704" container
	I1002 20:19:01.677378  994709 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:19:01.693107  994709 cli_runner.go:164] Run: docker volume create addons-693704 --label name.minikube.sigs.k8s.io=addons-693704 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:19:01.711600  994709 oci.go:103] Successfully created a docker volume addons-693704
	I1002 20:19:01.711704  994709 cli_runner.go:164] Run: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:19:02.731832  994709 cli_runner.go:217] Completed: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (1.020058685s)
	I1002 20:19:02.731865  994709 oci.go:107] Successfully prepared a docker volume addons-693704
	I1002 20:19:02.731897  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:02.731915  994709 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:19:02.731979  994709 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:19:07.072259  994709 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.340238594s)
	I1002 20:19:07.072312  994709 kic.go:203] duration metric: took 4.340372991s to extract preloaded images to volume ...
	W1002 20:19:07.072445  994709 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:19:07.072554  994709 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:19:07.131614  994709 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-693704 --name addons-693704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-693704 --network addons-693704 --ip 192.168.49.2 --volume addons-693704:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:19:07.425756  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Running}}
	I1002 20:19:07.450427  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.471353  994709 cli_runner.go:164] Run: docker exec addons-693704 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:19:07.519322  994709 oci.go:144] the created container "addons-693704" has a running status.
	I1002 20:19:07.519348  994709 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa...
	I1002 20:19:07.874970  994709 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:19:07.902253  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.924631  994709 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:19:07.924649  994709 kic_runner.go:114] Args: [docker exec --privileged addons-693704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:19:07.982879  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:08.009002  994709 machine.go:93] provisionDockerMachine start ...
	I1002 20:19:08.009096  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:08.026925  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:08.027256  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:08.027273  994709 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:19:08.027902  994709 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 20:19:11.161848  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.161874  994709 ubuntu.go:182] provisioning hostname "addons-693704"
	I1002 20:19:11.161998  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.180011  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.180318  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.180334  994709 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-693704 && echo "addons-693704" | sudo tee /etc/hostname
	I1002 20:19:11.318599  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.318673  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.334766  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.335074  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.335095  994709 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-693704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-693704/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-693704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:19:11.466309  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:19:11.466378  994709 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 20:19:11.466405  994709 ubuntu.go:190] setting up certificates
	I1002 20:19:11.466416  994709 provision.go:84] configureAuth start
	I1002 20:19:11.466491  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:11.484411  994709 provision.go:143] copyHostCerts
	I1002 20:19:11.484497  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 20:19:11.484648  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 20:19:11.484708  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 20:19:11.484757  994709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.addons-693704 san=[127.0.0.1 192.168.49.2 addons-693704 localhost minikube]
	I1002 20:19:11.600457  994709 provision.go:177] copyRemoteCerts
	I1002 20:19:11.600526  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:19:11.600571  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.617715  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:11.713831  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:19:11.731711  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:19:11.748544  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:19:11.765398  994709 provision.go:87] duration metric: took 298.94846ms to configureAuth
	I1002 20:19:11.765428  994709 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:19:11.765610  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:11.765720  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.782571  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.782895  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.782917  994709 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:19:12.024388  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:19:12.024409  994709 machine.go:96] duration metric: took 4.015387209s to provisionDockerMachine
	I1002 20:19:12.024420  994709 client.go:171] duration metric: took 11.035249443s to LocalClient.Create
	I1002 20:19:12.024430  994709 start.go:168] duration metric: took 11.035328481s to libmachine.API.Create "addons-693704"
	I1002 20:19:12.024438  994709 start.go:294] postStartSetup for "addons-693704" (driver="docker")
	I1002 20:19:12.024448  994709 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:19:12.024531  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:19:12.024581  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.046435  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.145575  994709 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:19:12.148535  994709 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:19:12.148564  994709 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:19:12.148574  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 20:19:12.148638  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 20:19:12.148666  994709 start.go:297] duration metric: took 124.222688ms for postStartSetup
	I1002 20:19:12.148981  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.164538  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:19:12.164807  994709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:19:12.164866  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.181186  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.274914  994709 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:19:12.279510  994709 start.go:129] duration metric: took 11.294122752s to createHost
	I1002 20:19:12.279576  994709 start.go:84] releasing machines lock for "addons-693704", held for 11.294297786s
	I1002 20:19:12.279683  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.298232  994709 ssh_runner.go:195] Run: cat /version.json
	I1002 20:19:12.298284  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.298302  994709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:19:12.298368  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.327555  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.332727  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.506484  994709 ssh_runner.go:195] Run: systemctl --version
	I1002 20:19:12.512752  994709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:19:12.553418  994709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:19:12.557546  994709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:19:12.557619  994709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:19:12.586608  994709 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:19:12.586633  994709 start.go:496] detecting cgroup driver to use...
	I1002 20:19:12.586667  994709 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:19:12.586718  994709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:19:12.605523  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:19:12.618955  994709 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:19:12.619019  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:19:12.636190  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:19:12.655245  994709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:19:12.773294  994709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:19:12.899674  994709 docker.go:234] disabling docker service ...
	I1002 20:19:12.899796  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:19:12.921306  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:19:12.935583  994709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:19:13.058429  994709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:19:13.191274  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:19:13.203980  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:19:13.218083  994709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:19:13.218172  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.227208  994709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:19:13.227310  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.236115  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.244683  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.253282  994709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:19:13.260942  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.269710  994709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.282906  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.291613  994709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:19:13.298701  994709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:19:13.306154  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.416108  994709 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:19:13.549800  994709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:19:13.549963  994709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:19:13.553947  994709 start.go:564] Will wait 60s for crictl version
	I1002 20:19:13.554015  994709 ssh_runner.go:195] Run: which crictl
	I1002 20:19:13.557729  994709 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:19:13.584434  994709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:19:13.584598  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.611885  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.643761  994709 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:19:13.646706  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:13.662159  994709 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:19:13.665953  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:13.675384  994709 kubeadm.go:883] updating cluster {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:19:13.675498  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:13.675559  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.707568  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.707592  994709 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:19:13.707650  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.733091  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.733117  994709 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:19:13.733126  994709 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:19:13.733260  994709 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-693704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
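The unit fragment above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines down); a sketch of how to view the merged unit on the node:

	sudo systemctl cat kubelet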
	I1002 20:19:13.733342  994709 ssh_runner.go:195] Run: crio config
	I1002 20:19:13.792130  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:13.792153  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:13.792194  994709 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:19:13.792227  994709 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-693704 NodeName:addons-693704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:19:13.792401  994709 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-693704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:19:13.792492  994709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:19:13.800668  994709 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:19:13.800767  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:19:13.808293  994709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 20:19:13.821242  994709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:19:13.834169  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
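The kubeadm config dumped above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before it is used. As a hedged sketch (assuming the kubeadm binary sits under /var/lib/minikube/binaries/v1.34.1, as the init command later in this log shows), the multi-document YAML could be sanity-checked offline; `kubeadm config validate` exists in recent kubeadm releases:

	# Sketch: validate the staged config without starting anything.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new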
	I1002 20:19:13.846928  994709 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:19:13.850566  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:13.860224  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.968588  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:19:13.985352  994709 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704 for IP: 192.168.49.2
	I1002 20:19:13.985422  994709 certs.go:195] generating shared ca certs ...
	I1002 20:19:13.985470  994709 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:13.985658  994709 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 20:19:15.330293  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt ...
	I1002 20:19:15.330325  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt: {Name:mk4cd3e6dd08eb98d92774a50706472e7144a029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330529  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key ...
	I1002 20:19:15.330543  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key: {Name:mk973528442a241534dab3b3f10010ef617c41eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330647  994709 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 20:19:15.997150  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt ...
	I1002 20:19:15.997181  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt: {Name:mk99f3de897f678c1a5844576ab27113951f2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997373  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key ...
	I1002 20:19:15.997386  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key: {Name:mka357a75cbeebaba7cc94478a077ee2190bafb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997484  994709 certs.go:257] generating profile certs ...
	I1002 20:19:15.997541  994709 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key
	I1002 20:19:15.997561  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt with IP's: []
	I1002 20:19:16.185268  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt ...
	I1002 20:19:16.185298  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: {Name:mk19c4790d2aed31a89cf09dcf81ae3f076c409b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185485  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key ...
	I1002 20:19:16.185498  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key: {Name:mk1b58c21fd0fb98ae80d1aeead9a8a2c7b84f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185581  994709 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d
	I1002 20:19:16.185600  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:19:16.909759  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d ...
	I1002 20:19:16.909792  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d: {Name:mkcdcc8a35d2bead0bc666b364b50007c53b8ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.910784  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d ...
	I1002 20:19:16.910803  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d: {Name:mk54e705787535bd0f02f9a6cb06ac271457b26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.911454  994709 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt
	I1002 20:19:16.911552  994709 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key
	I1002 20:19:16.911609  994709 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key
	I1002 20:19:16.911632  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt with IP's: []
	I1002 20:19:17.189632  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt ...
	I1002 20:19:17.189663  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt: {Name:mkc2967e5b8de8de5ffc244b2174ce7d1307c7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.189855  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key ...
	I1002 20:19:17.189870  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key: {Name:mk3a5d9aa39ed72b68b1236fc674f044b595f3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.190670  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:19:17.190720  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:19:17.190746  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:19:17.190775  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 20:19:17.191345  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:19:17.209222  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:19:17.228051  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:19:17.245976  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:19:17.263876  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:19:17.281588  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:19:17.300066  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:19:17.317623  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:19:17.335889  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:19:17.355499  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:19:17.368597  994709 ssh_runner.go:195] Run: openssl version
	I1002 20:19:17.375290  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:19:17.383559  994709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387356  994709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387462  994709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.428204  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
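The b5213941.0 symlink created above follows the OpenSSL c_rehash convention: the filename is the certificate's subject hash (exactly what the preceding `openssl x509 -hash -noout` call prints) plus a .0 suffix, which is how TLS clients on the node locate the minikube CA under /etc/ssl/certs. A minimal check, assuming the same paths:

	# Should print the hash prefix used for the /etc/ssl/certs symlink (b5213941).
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem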
	I1002 20:19:17.436613  994709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:19:17.440314  994709 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:19:17.440367  994709 kubeadm.go:400] StartCluster: {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:19:17.440454  994709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:19:17.440516  994709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:19:17.467595  994709 cri.go:89] found id: ""
	I1002 20:19:17.467677  994709 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:19:17.475494  994709 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:19:17.483312  994709 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:19:17.483390  994709 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:19:17.491411  994709 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:19:17.491431  994709 kubeadm.go:157] found existing configuration files:
	
	I1002 20:19:17.491483  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:19:17.499089  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:19:17.499169  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:19:17.506794  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:19:17.514714  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:19:17.514785  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:19:17.522181  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.530993  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:19:17.531060  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.538976  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:19:17.546795  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:19:17.546892  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
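Each of the four probes above follows the same pattern: grep the kubeconfig for the expected control-plane endpoint and remove the file when the endpoint is absent (here, because the files do not exist yet on first start). A hypothetical consolidation of what the logs show, for readability only:

	# Sketch of the stale-config cleanup performed file by file above.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done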
	I1002 20:19:17.554492  994709 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:19:17.596193  994709 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:19:17.596303  994709 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:19:17.627320  994709 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:19:17.627397  994709 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:19:17.627440  994709 kubeadm.go:318] OS: Linux
	I1002 20:19:17.627493  994709 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:19:17.627548  994709 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:19:17.627604  994709 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:19:17.627659  994709 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:19:17.627714  994709 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:19:17.627769  994709 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:19:17.627820  994709 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:19:17.627872  994709 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:19:17.627924  994709 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:19:17.698891  994709 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:19:17.699015  994709 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:19:17.699132  994709 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:19:17.708645  994709 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:19:17.711822  994709 out.go:252]   - Generating certificates and keys ...
	I1002 20:19:17.711957  994709 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:19:17.712048  994709 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:19:17.858214  994709 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:19:19.472133  994709 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:19:19.853869  994709 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:19:20.278527  994709 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:19:21.038810  994709 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:19:21.039005  994709 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:21.583298  994709 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:19:21.583465  994709 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:22.178821  994709 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:19:22.869729  994709 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:19:23.067072  994709 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:19:23.067180  994709 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:19:23.190079  994709 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:19:23.633624  994709 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:19:23.861907  994709 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:19:24.252326  994709 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:19:24.757359  994709 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:19:24.758089  994709 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:19:24.760711  994709 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:19:24.764198  994709 out.go:252]   - Booting up control plane ...
	I1002 20:19:24.764310  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:19:24.764403  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:19:24.764489  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:19:24.780867  994709 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:19:24.781188  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:19:24.788581  994709 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:19:24.789049  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:19:24.789397  994709 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:19:24.926323  994709 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:19:24.926459  994709 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:19:26.427259  994709 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501639322s
	I1002 20:19:26.430848  994709 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:19:26.430969  994709 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:19:26.431069  994709 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:19:26.431155  994709 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:19:28.445585  994709 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.013932999s
	I1002 20:19:30.026061  994709 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.595131543s
	I1002 20:19:31.934100  994709 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501085496s
	I1002 20:19:31.955369  994709 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:19:31.978849  994709 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:19:32.006745  994709 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:19:32.007240  994709 kubeadm.go:318] [mark-control-plane] Marking the node addons-693704 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:19:32.024906  994709 kubeadm.go:318] [bootstrap-token] Using token: 1gg1hv.lld6lawd4ni62mxk
	I1002 20:19:32.028031  994709 out.go:252]   - Configuring RBAC rules ...
	I1002 20:19:32.028186  994709 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:19:32.038937  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:19:32.049818  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:19:32.054935  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:19:32.062162  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:19:32.070713  994709 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:19:32.338182  994709 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:19:32.784741  994709 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:19:33.338747  994709 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:19:33.340165  994709 kubeadm.go:318] 
	I1002 20:19:33.340273  994709 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:19:33.340285  994709 kubeadm.go:318] 
	I1002 20:19:33.340381  994709 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:19:33.340391  994709 kubeadm.go:318] 
	I1002 20:19:33.340426  994709 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:19:33.340507  994709 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:19:33.340581  994709 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:19:33.340595  994709 kubeadm.go:318] 
	I1002 20:19:33.340666  994709 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:19:33.340674  994709 kubeadm.go:318] 
	I1002 20:19:33.340728  994709 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:19:33.340734  994709 kubeadm.go:318] 
	I1002 20:19:33.340801  994709 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:19:33.340885  994709 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:19:33.340967  994709 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:19:33.340973  994709 kubeadm.go:318] 
	I1002 20:19:33.341069  994709 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:19:33.341173  994709 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:19:33.341179  994709 kubeadm.go:318] 
	I1002 20:19:33.341310  994709 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341442  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 20:19:33.341466  994709 kubeadm.go:318] 	--control-plane 
	I1002 20:19:33.341470  994709 kubeadm.go:318] 
	I1002 20:19:33.341572  994709 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:19:33.341578  994709 kubeadm.go:318] 
	I1002 20:19:33.341672  994709 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341797  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 20:19:33.345719  994709 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:19:33.345963  994709 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:19:33.346097  994709 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
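The Service-Kubelet warning is expected in this flow: minikube ships its own kubelet unit files (the scp of kubelet.service and 10-kubeadm.conf earlier in this log) and starts kubelet directly, so the systemd unit is never enabled. On a node managed by kubeadm alone, the warning's own remediation would apply:

	# Suggested by the preflight warning itself; not needed under minikube.
	sudo systemctl enable kubelet.service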
	I1002 20:19:33.346131  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:33.346146  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:33.349554  994709 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:19:33.352542  994709 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:19:33.358001  994709 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:19:33.358065  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:19:33.375272  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 20:19:33.656465  994709 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:19:33.656564  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:33.656619  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-693704 minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=addons-693704 minikube.k8s.io/primary=true
	I1002 20:19:33.838722  994709 ops.go:34] apiserver oom_adj: -16
	I1002 20:19:33.838894  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.339235  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.839327  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.339115  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.839347  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.339936  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.838951  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.339896  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.839301  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.981403  994709 kubeadm.go:1113] duration metric: took 4.324906426s to wait for elevateKubeSystemPrivileges
	I1002 20:19:37.981430  994709 kubeadm.go:402] duration metric: took 20.541068078s to StartCluster
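The elevateKubeSystemPrivileges step polls `kubectl get sa default` roughly every 500ms (the repeated calls above) until the default ServiceAccount exists, so that the minikube-rbac cluster-admin binding can take effect. A hedged shell equivalent of that wait loop, under the same paths:

	# Sketch: block until the 'default' ServiceAccount is provisioned.
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done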
	I1002 20:19:37.981448  994709 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982146  994709 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:19:37.982540  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982732  994709 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:37.982850  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:19:37.983086  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:37.983116  994709 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 20:19:37.983227  994709 addons.go:69] Setting yakd=true in profile "addons-693704"
	I1002 20:19:37.983240  994709 addons.go:238] Setting addon yakd=true in "addons-693704"
	I1002 20:19:37.983262  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.983805  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.983948  994709 addons.go:69] Setting inspektor-gadget=true in profile "addons-693704"
	I1002 20:19:37.983963  994709 addons.go:238] Setting addon inspektor-gadget=true in "addons-693704"
	I1002 20:19:37.983984  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.984372  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.984784  994709 addons.go:69] Setting metrics-server=true in profile "addons-693704"
	I1002 20:19:37.984803  994709 addons.go:238] Setting addon metrics-server=true in "addons-693704"
	I1002 20:19:37.984846  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.985255  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986812  994709 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.987111  994709 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-693704"
	I1002 20:19:37.987164  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.986986  994709 addons.go:69] Setting cloud-spanner=true in profile "addons-693704"
	I1002 20:19:37.988662  994709 addons.go:238] Setting addon cloud-spanner=true in "addons-693704"
	I1002 20:19:37.988715  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.989206  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986995  994709 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-693704"
	I1002 20:19:37.992261  994709 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:37.992347  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993008  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.993440  994709 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.993470  994709 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-693704"
	I1002 20:19:37.993496  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993939  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986999  994709 addons.go:69] Setting default-storageclass=true in profile "addons-693704"
	I1002 20:19:37.999991  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-693704"
	I1002 20:19:37.987003  994709 addons.go:69] Setting gcp-auth=true in profile "addons-693704"
	I1002 20:19:38.001780  994709 mustload.go:65] Loading cluster: addons-693704
	I1002 20:19:38.002068  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:38.002442  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.004847  994709 addons.go:69] Setting registry=true in profile "addons-693704"
	I1002 20:19:38.004895  994709 addons.go:238] Setting addon registry=true in "addons-693704"
	I1002 20:19:38.004938  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.006258  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987015  994709 addons.go:69] Setting ingress=true in profile "addons-693704"
	I1002 20:19:38.027270  994709 addons.go:238] Setting addon ingress=true in "addons-693704"
	I1002 20:19:38.027361  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.027894  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987020  994709 addons.go:69] Setting ingress-dns=true in profile "addons-693704"
	I1002 20:19:38.058307  994709 addons.go:238] Setting addon ingress-dns=true in "addons-693704"
	I1002 20:19:38.058379  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.058921  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.096850  994709 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:19:38.105676  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:19:38.105709  994709 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:19:38.105842  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.008072  994709 out.go:179] * Verifying Kubernetes components...
	I1002 20:19:38.008152  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026483  994709 addons.go:69] Setting registry-creds=true in profile "addons-693704"
	I1002 20:19:38.116211  994709 addons.go:238] Setting addon registry-creds=true in "addons-693704"
	I1002 20:19:38.116261  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.116877  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.148060  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:38.026500  994709 addons.go:69] Setting storage-provisioner=true in profile "addons-693704"
	I1002 20:19:38.148217  994709 addons.go:238] Setting addon storage-provisioner=true in "addons-693704"
	I1002 20:19:38.148254  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.148800  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026507  994709 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-693704"
	I1002 20:19:38.181689  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-693704"
	I1002 20:19:38.026527  994709 addons.go:69] Setting volcano=true in profile "addons-693704"
	I1002 20:19:38.185000  994709 addons.go:238] Setting addon volcano=true in "addons-693704"
	I1002 20:19:38.185048  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.200337  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026533  994709 addons.go:69] Setting volumesnapshots=true in profile "addons-693704"
	I1002 20:19:38.221856  994709 addons.go:238] Setting addon volumesnapshots=true in "addons-693704"
	I1002 20:19:38.221908  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.222576  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.234975  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.241128  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:19:38.241462  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.027224  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.264137  994709 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:19:38.269034  994709 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:38.269076  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:19:38.269173  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.294256  994709 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:19:38.298092  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:19:38.298232  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:19:38.298258  994709 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:19:38.298339  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.305328  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:19:38.326652  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.333498  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.339026  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:19:38.339916  994709 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:19:38.340074  994709 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:19:38.348717  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:19:38.349240  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:38.349263  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:19:38.349335  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.370496  994709 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:19:38.370522  994709 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:19:38.370590  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393413  994709 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:38.393443  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:19:38.393518  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393705  994709 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:19:38.401523  994709 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:38.401566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:19:38.401656  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.415528  994709 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:19:38.419444  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:19:38.424637  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:19:38.430455  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:19:38.433425  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:19:38.434098  994709 out.go:179]   - Using image docker.io/registry:3.0.0
	W1002 20:19:38.437996  994709 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 20:19:38.442715  994709 addons.go:238] Setting addon default-storageclass=true in "addons-693704"
	I1002 20:19:38.442755  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.443165  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.443728  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.447652  994709 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:19:38.447679  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:19:38.447744  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.463660  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.464460  994709 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:19:38.466815  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:19:38.467693  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:38.467719  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:19:38.467819  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.470864  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:19:38.470890  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:19:38.470960  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.500926  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.502016  994709 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:19:38.503153  994709 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:19:38.510195  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:19:38.510222  994709 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:19:38.510304  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.511213  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
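The CoreDNS rewrite above is a single pipeline: dump the coredns ConfigMap, use sed to insert a hosts block (mapping 192.168.49.1 to host.minikube.internal, with fallthrough) before the forward directive and a log directive before errors, then feed the result back through kubectl replace. The same pipeline, hypothetically re-broken across lines for readability:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -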
	I1002 20:19:38.512545  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.514344  994709 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-693704"
	I1002 20:19:38.514385  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.514794  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.538485  994709 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:38.538505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:19:38.538577  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.563237  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:38.563266  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:19:38.563330  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.573905  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.605278  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.621692  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.637902  994709 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:38.637933  994709 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:19:38.638002  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.655698  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.682118  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.689646  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.707346  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.731079  994709 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:19:38.738329  994709 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:19:38.738517  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.739582  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.741646  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.741686  994709 retry.go:31] will retry after 354.664397ms: ssh: handshake failed: EOF
	I1002 20:19:38.741822  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:38.741834  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:19:38.741914  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.754174  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.790638  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.791850  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.791874  994709 retry.go:31] will retry after 168.291026ms: ssh: handshake failed: EOF
	I1002 20:19:38.891518  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:19:38.961324  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.961355  994709 retry.go:31] will retry after 311.734351ms: ssh: handshake failed: EOF
	I1002 20:19:39.180793  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:19:39.180831  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:19:39.246769  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:19:39.246793  994709 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:19:39.317148  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:19:39.317174  994709 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:19:39.327274  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:39.369305  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:39.371258  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:39.386300  994709 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:19:39.386327  994709 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:19:39.412476  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:19:39.412502  994709 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:19:39.447295  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:39.454691  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:19:39.454712  994709 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:19:39.483532  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:39.489546  994709 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.489572  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:19:39.600950  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:19:39.600977  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:19:39.608088  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.608113  994709 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:19:39.625123  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:19:39.625149  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:19:39.646231  994709 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.646256  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:19:39.666494  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.667190  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.667209  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:19:39.670888  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:39.686238  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:39.763670  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:39.778706  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:19:39.778734  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:19:39.800126  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.803147  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.824074  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:19:39.824103  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:19:39.826926  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.887787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:39.970247  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:19:39.970276  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:19:39.982837  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:19:39.982863  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:19:40.095977  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:19:40.096005  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:19:40.202267  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:19:40.202301  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:19:40.252464  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:19:40.252492  994709 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:19:40.425953  994709 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.425979  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:19:40.440769  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:19:40.440793  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.759801869s)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.140115117s)
	I1002 20:19:40.651466  994709 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
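The sed pipeline completed above rewrites the coredns ConfigMap in place: it splices a hosts plugin block in front of the forward directive so that host.minikube.internal resolves to the gateway address 192.168.49.1, and adds a log directive ahead of errors. A minimal sketch of the resulting Corefile fragment, assuming an otherwise default minikube Corefile (abridged; only the plugins touched by the edit are shown):

    # Sketch of the coredns ConfigMap after injection; illustrative only,
    # the real Corefile carries additional plugins (health, kubernetes, cache, ...).
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            log
            errors
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
        }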
	I1002 20:19:40.652113  994709 node_ready.go:35] waiting up to 6m0s for node "addons-693704" to be "Ready" ...
	I1002 20:19:40.708925  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.740283  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:19:40.740311  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:19:41.000182  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:19:41.000218  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:19:41.157742  994709 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-693704" context rescaled to 1 replicas
	I1002 20:19:41.160542  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:19:41.160566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:19:41.368904  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:19:41.368930  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:19:41.434210  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.106899571s)
	I1002 20:19:41.434277  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.064948233s)
	I1002 20:19:41.546392  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 20:19:42.681278  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:44.305558  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.934264933s)
	I1002 20:19:44.305591  994709 addons.go:479] Verifying addon ingress=true in "addons-693704"
	I1002 20:19:44.305742  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.858421462s)
	I1002 20:19:44.305803  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.822248913s)
	I1002 20:19:44.306140  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.639611107s)
	W1002 20:19:44.306168  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:44.306190  994709 retry.go:31] will retry after 271.617135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
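The validation error driving this retry loop points at the ig-crd.yaml payload itself: kubectl's client-side validation rejects any YAML document that omits the apiVersion and kind fields every Kubernetes object must declare, so reapplying the same file cannot succeed. For reference, a minimal well-formed CustomResourceDefinition header looks like the sketch below; the group and resource names are hypothetical stand-ins, not what Inspektor Gadget actually ships:

    # Minimal CRD skeleton showing the type metadata kubectl validates for.
    # "traces.gadget.example.io" is a made-up name used only for illustration.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.example.io
    spec:
      group: gadget.example.io
      scope: Namespaced
      names:
        kind: Trace
        plural: traces
        singular: trace
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true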
	I1002 20:19:44.306249  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.635338018s)
	I1002 20:19:44.306301  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.62003767s)
	I1002 20:19:44.306341  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.542651372s)
	I1002 20:19:44.306505  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.506354272s)
	I1002 20:19:44.306533  994709 addons.go:479] Verifying addon registry=true in "addons-693704"
	I1002 20:19:44.306707  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.503533192s)
	I1002 20:19:44.306720  994709 addons.go:479] Verifying addon metrics-server=true in "addons-693704"
	I1002 20:19:44.306759  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.479800741s)
	I1002 20:19:44.307143  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.41932494s)
	I1002 20:19:44.307220  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.598267016s)
	W1002 20:19:44.307774  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:19:44.307787  994709 retry.go:31] will retry after 292.505551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
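Unlike the ig-crd failure, this one is an ordering problem rather than a malformed manifest: csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the REST mapper cannot resolve kind VolumeSnapshotClass until those CRDs are established, so the first attempt fails and the retry, running after the CRDs exist, is expected to go through. A sketch of the kind of object the snapshot-class manifest defines; the driver value is an assumption based on the CSI hostpath driver's conventional name:

    # Sketch of a VolumeSnapshotClass; it can only be created once the
    # volumesnapshotclasses.snapshot.storage.k8s.io CRD is established.
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass
    driver: hostpath.csi.k8s.io   # assumed driver name, for illustration
    deletionPolicy: Delete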
	I1002 20:19:44.308765  994709 out.go:179] * Verifying ingress addon...
	I1002 20:19:44.312945  994709 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-693704 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:19:44.313054  994709 out.go:179] * Verifying registry addon...
	I1002 20:19:44.315485  994709 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:19:44.317462  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:19:44.330428  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:19:44.330450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.330653  994709 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:19:44.330663  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 20:19:44.357589  994709 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
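This warning is the API server's optimistic-concurrency check firing: making one StorageClass the default means patching the is-default-class annotation on the others, and here the update to the local-path object raced with another writer, so it was rejected against a stale resourceVersion. The annotation being toggled looks like the sketch below (values illustrative; provisioner shown for context):

    # The default-storageclass addon flips this annotation on each StorageClass;
    # a concurrent update to the same object yields the conflict reported above.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: rancher.io/local-path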
	I1002 20:19:44.577967  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:44.601481  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:44.645691  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.099252349s)
	I1002 20:19:44.645728  994709 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:44.650504  994709 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:19:44.655039  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:19:44.667816  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:19:44.667846  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:44.821715  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.822383  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 20:19:45.161026  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:45.165268  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.325696  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.325851  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.657820  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.818501  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.820022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.829170  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.25115874s)
	W1002 20:19:45.829204  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:45.829226  994709 retry.go:31] will retry after 265.136863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:45.829298  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.227785836s)
	I1002 20:19:45.919439  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:19:45.919542  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:45.937711  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.064145  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:19:46.077198  994709 addons.go:238] Setting addon gcp-auth=true in "addons-693704"
	I1002 20:19:46.077246  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:46.077691  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:46.095085  994709 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:19:46.095135  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:46.095095  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:46.123058  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.164369  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.319756  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.321805  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:46.659517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.818237  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.819904  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:46.919069  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:46.919103  994709 retry.go:31] will retry after 624.133237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:46.922816  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:46.925777  994709 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:19:46.928684  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:19:46.928707  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:19:46.942491  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:19:46.942514  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:19:46.955438  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:46.955505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:19:46.968124  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:47.157960  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.322368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.322695  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.436497  994709 addons.go:479] Verifying addon gcp-auth=true in "addons-693704"
	I1002 20:19:47.440771  994709 out.go:179] * Verifying gcp-auth addon...
	I1002 20:19:47.444303  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:19:47.456952  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:19:47.457022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:47.544036  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:19:47.655544  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:47.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.819482  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.821740  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.947877  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.158799  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.321611  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.322176  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:48.351318  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:48.351351  994709 retry.go:31] will retry after 722.588456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:48.447412  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.658545  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.819500  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.821008  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.947811  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:49.074176  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:49.159044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.319369  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.321354  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:49.447565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:49.655967  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:49.657396  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.821534  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.821767  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:49.880261  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.880299  994709 retry.go:31] will retry after 823.045422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:49.948030  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.158812  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.318859  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.321025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.448207  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.657430  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.703742  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:50.819118  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.821057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.948368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:51.157785  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.320463  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.321544  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.448039  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:51.519077  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:51.519109  994709 retry.go:31] will retry after 1.329942428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:51.658147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.820515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.820951  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.947804  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:52.155980  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:52.158167  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.319637  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.321091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.448243  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:52.657697  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.819249  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.821572  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.849787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:52.949420  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:53.160825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.319057  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.321137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.448348  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:53.651601  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:53.651634  994709 retry.go:31] will retry after 4.065518596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:53.657468  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.820524  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.821033  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.948075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:54.157447  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:54.158479  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.318431  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.320091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.447825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:54.657905  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.819025  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.820709  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.947593  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.158249  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.320256  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.320691  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.447448  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.658171  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.820678  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.821069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.948074  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:56.157411  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.319659  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.320449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.447640  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:56.655854  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:56.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.818780  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.820792  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.947591  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.157766  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.318816  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.320927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.447823  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.657501  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.717603  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:57.820669  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.822065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.948192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:58.157875  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.319486  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.321536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.447507  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:58.508047  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:58.508078  994709 retry.go:31] will retry after 6.392155287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:58.657525  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.818599  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.820265  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.947800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:59.155950  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:59.158057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.321502  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.447568  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:59.657515  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.818527  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.820423  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.947158  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.191965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.322779  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.323712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.462450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.662487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.820978  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.821119  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.947103  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:01.165936  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.319105  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.321152  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.448705  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:01.656452  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:01.660465  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.820149  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.822237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.949425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.159485  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.320094  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.320855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.447847  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.658087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.822950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.823232  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.948025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.158590  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.318905  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.447723  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.821238  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.821662  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.947536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:04.157181  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:04.158586  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.319406  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.320569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.448026  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:04.657883  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.821087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.821316  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.900418  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:04.947850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.159494  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:05.319260  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.321183  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.448018  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.659872  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:05.704226  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.704266  994709 retry.go:31] will retry after 4.650395594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.819910  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.820237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.947300  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:06.157427  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:06.158681  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.319989  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.321509  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.447503  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:06.658321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.819075  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.820269  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.948556  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.158188  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.319456  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.320273  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.657768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.820523  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.821011  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.947761  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:08.157867  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.323022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.323328  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.447949  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:08.655164  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:08.657821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.820915  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.822270  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.947285  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.157631  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.319269  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.320630  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.447999  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.657541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.821314  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.821825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.947519  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:10.158695  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.320550  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.322127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.355287  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:10.448320  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:10.655677  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:10.658684  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.819582  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.820893  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.948135  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.160067  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:11.205481  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.205529  994709 retry.go:31] will retry after 8.886793783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
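Note: the retry.go:31 entries show the addon applier re-running the failed kubectl apply with a jittered, growing delay (6.39s, 4.65s, 8.89s between attempts so far). A minimal Go sketch of that retry-with-jitter pattern, assuming a hypothetical apply callback; it illustrates the loop shape, not minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply runs fn until it succeeds or attempts run out, sleeping a
	// jittered interval that grows between tries, then returns the last error.
	func retryApply(fn func() error, attempts int) error {
		base := 5 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// sleep between 0.5x and 1.5x of base, like the varying delays in the log
			sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			base = base * 3 / 2
		}
		return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
	}

	func main() {
		_ = retryApply(func() error { return errors.New("validation error") }, 4)
	}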
	I1002 20:20:11.319286  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.320699  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.447959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.658932  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.818675  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.820427  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.947127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.157818  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.319903  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.320793  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.447987  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.819021  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.820692  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.947551  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:13.156319  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:13.159173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.319051  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.321143  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:13.657596  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.820773  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.820960  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.948072  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.158231  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.319445  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.320543  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.447788  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.658082  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.819689  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.821091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.948202  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:15.157836  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.319547  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.321065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.448065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:15.654975  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:15.658703  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.819187  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.823588  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.947274  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.158585  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.318872  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.321029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.448029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.658178  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.819331  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.819902  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.947835  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.158511  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.319014  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.320821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.447892  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.658439  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.818480  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.820595  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.947741  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:18.157451  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:18.159031  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.320870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.321273  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.448214  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:18.658565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.819116  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.821998  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.948071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.175178  994709 node_ready.go:49] node "addons-693704" is "Ready"
	I1002 20:20:19.175210  994709 node_ready.go:38] duration metric: took 38.523057861s for node "addons-693704" to be "Ready" ...
	I1002 20:20:19.175224  994709 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:20:19.175288  994709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:19.193541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.198169  994709 api_server.go:72] duration metric: took 41.215410635s to wait for apiserver process to appear ...
	I1002 20:20:19.198244  994709 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:20:19.198278  994709 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:20:19.210833  994709 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 20:20:19.213021  994709 api_server.go:141] control plane version: v1.34.1
	I1002 20:20:19.213118  994709 api_server.go:131] duration metric: took 14.852434ms to wait for apiserver health ...
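Note: once the node reports Ready, startup switches to probing the apiserver's /healthz endpoint until it answers 200 with body "ok" (20:20:19.210 above). A minimal sketch of such a probe against the address from this log; InsecureSkipVerify is an illustration-only shortcut, a real client should trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// shortcut for the sketch: skip verifying the apiserver's self-signed cert
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver prints: 200 ok
	}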
	I1002 20:20:19.213143  994709 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:20:19.259918  994709 system_pods.go:59] 18 kube-system pods found
	I1002 20:20:19.260007  994709 system_pods.go:61] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.260029  994709 system_pods.go:61] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.260046  994709 system_pods.go:61] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.260082  994709 system_pods.go:61] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.260110  994709 system_pods.go:61] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.260130  994709 system_pods.go:61] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.260165  994709 system_pods.go:61] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.260195  994709 system_pods.go:61] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 20:20:19.260219  994709 system_pods.go:61] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.260254  994709 system_pods.go:61] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.260278  994709 system_pods.go:61] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.260300  994709 system_pods.go:61] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.260337  994709 system_pods.go:61] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.260361  994709 system_pods.go:61] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.260379  994709 system_pods.go:61] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.260414  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.260436  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.260455  994709 system_pods.go:61] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.260473  994709 system_pods.go:74] duration metric: took 47.310617ms to wait for pod list to return data ...
	I1002 20:20:19.260513  994709 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:20:19.273557  994709 default_sa.go:45] found service account: "default"
	I1002 20:20:19.273635  994709 default_sa.go:55] duration metric: took 13.103031ms for default service account to be created ...
	I1002 20:20:19.273660  994709 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:20:19.293816  994709 system_pods.go:86] 18 kube-system pods found
	I1002 20:20:19.293898  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.293920  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.293938  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.293975  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.294002  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.294023  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.294068  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.294095  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.294114  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.294148  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.294173  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.294198  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.294246  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.294273  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.294296  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.294328  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.294351  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.294370  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.294416  994709 retry.go:31] will retry after 259.220758ms: missing components: kube-dns
	I1002 20:20:19.349532  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.350103  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:20:19.350175  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.523669  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.643831  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:19.643867  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.643879  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:19.643887  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:19.643893  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending
	I1002 20:20:19.643899  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.643904  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.643909  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.643918  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.643923  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.643931  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.643935  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.643940  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.643944  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.643948  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.643961  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.643965  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.643972  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:19.643980  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.643985  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.644006  994709 retry.go:31] will retry after 341.024008ms: missing components: kube-dns
	I1002 20:20:19.671892  994709 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:20:19.671917  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
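Note: the kapi.go:86/:96 lines poll pods matching a label selector and log the current phase until every match is Running (the registry selector has just matched 2 pods, csi-hostpath-driver 3). A minimal client-go sketch of that wait loop; the kubeconfig path comes from this log and the namespace/selector are the ones being waited on, but this is not minikube's actual kapi code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				return // every matching pod is Running
			}
			time.Sleep(500 * time.Millisecond)
		}
	}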
	I1002 20:20:19.827024  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.828000  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.961916  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.012275  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.012323  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.012334  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.012342  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.012350  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.012356  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.012362  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.012372  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.012377  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.012388  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.012400  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.012405  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.012412  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.012423  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.012429  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.012437  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.012448  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.012455  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012463  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012473  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:20:20.012491  994709 retry.go:31] will retry after 476.605934ms: missing components: kube-dns
	I1002 20:20:20.092973  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:20.160870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.323333  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.326140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.449179  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.500973  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.501060  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.501104  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.501129  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.501166  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.501192  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.501214  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.501249  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.501273  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.501296  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.501332  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.501358  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.501381  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.501417  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.501444  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.501467  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.501502  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.501531  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501554  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501589  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.501625  994709 retry.go:31] will retry after 439.708141ms: missing components: kube-dns
	I1002 20:20:20.672849  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.819664  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.823622  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.948959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.951441  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.951521  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.951545  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.951570  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.951663  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.951686  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.951728  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.951751  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.951769  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.951805  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.951826  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.951847  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.951883  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.951908  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.951932  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.951970  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.951997  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.952021  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952055  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952078  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.952108  994709 retry.go:31] will retry after 739.124115ms: missing components: kube-dns
	I1002 20:20:21.175706  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.321496  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.322173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.447868  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:21.558307  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.465295653s)
	W1002 20:20:21.558346  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:21.558363  994709 retry.go:31] will retry after 14.276526589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:21.659390  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.696852  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:21.696889  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Running
	I1002 20:20:21.696903  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:21.696912  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:21.696919  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:21.696928  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:21.696933  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:21.696952  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:21.696957  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:21.696969  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:21.696973  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:21.696977  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:21.696984  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:21.696990  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:21.696997  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:21.697004  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:21.697010  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:21.697017  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697023  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697030  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:21.697039  994709 system_pods.go:126] duration metric: took 2.42335813s to wait for k8s-apps to be running ...
	I1002 20:20:21.697049  994709 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:20:21.697109  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:20:21.712608  994709 system_svc.go:56] duration metric: took 15.548645ms WaitForService to wait for kubelet
	I1002 20:20:21.712637  994709 kubeadm.go:586] duration metric: took 43.729883809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:20:21.712662  994709 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:20:21.716152  994709 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:20:21.716184  994709 node_conditions.go:123] node cpu capacity is 2
	I1002 20:20:21.716196  994709 node_conditions.go:105] duration metric: took 3.528491ms to run NodePressure ...
	I1002 20:20:21.716212  994709 start.go:242] waiting for startup goroutines ...
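The kapi.go:96 lines that dominate the rest of this log come from a poll loop that lists pods by label selector and reports their phase until they leave Pending. A minimal client-go sketch of that pattern, not minikube's actual kapi.go (the kubeconfig path and selector are taken from the log; the namespace, interval, and timeout are assumed):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls until every pod matching the selector reports Running.
func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// Mirrors the "waiting for pod ..., current state: Pending" lines above.
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForSelector(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}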
	[... kapi.go:96 poll lines elided: the same four selectors (kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth) are re-checked several times a second from 20:20:21 to 20:20:35; every check reports Pending: [<nil>] ...]
	I1002 20:20:35.835876  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:35.949194  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.163303  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.369184  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.369321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.659548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.819011  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.821548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.947353  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.013829  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177912886s)
	W1002 20:20:37.013873  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:37.013894  994709 retry.go:31] will retry after 16.584617559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	[... kapi.go:96 poll lines elided: the same four selectors are re-checked several times a second from 20:20:37 to 20:20:53; every check reports Pending: [<nil>] ...]
	I1002 20:20:53.598745  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:53.658921  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.822095  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.822140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:53.948027  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.159720  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.319139  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.323475  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:54.449052  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.659950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.800080  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201286682s)
	W1002 20:20:54.800158  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:54.800190  994709 retry.go:31] will retry after 36.238432013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
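Each failed apply above is rescheduled by retry.go:31 with a growing, jittered delay (739ms, then ~14s, ~16s, ~36s). A minimal sketch of that retry-with-backoff shape, assuming a doubling base delay plus random jitter rather than minikube's exact policy in pkg/util/retry:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn until it succeeds or maxTries is exhausted, sleeping a
// jittered, roughly doubling interval between attempts.
func retryAfter(fn func() error, base time.Duration, maxTries int) error {
	delay := base
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if attempt >= maxTries {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		// Jitter keeps repeated retries from synchronizing on the API server.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	tries := 0
	_ = retryAfter(func() error {
		tries++
		if tries < 3 {
			return fmt.Errorf("apply failed (attempt %d)", tries)
		}
		return nil
	}, 500*time.Millisecond, 5)
}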
	[... kapi.go:96 poll lines elided: the same four selectors are re-checked several times a second from 20:20:54 to 20:21:08; every check reports Pending: [<nil>] ...]
	I1002 20:21:08.320300  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.320874  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:08.448633  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.659076  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.820477  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.820621  994709 kapi.go:107] duration metric: took 1m24.50316116s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 20:21:08.948034  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.158956  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.319324  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.447625  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.660083  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.826440  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.949323  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.163992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.320103  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.449195  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.658029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.843087  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.948535  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.159397  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.319712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.447769  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.659756  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.819109  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.947822  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.159549  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.319206  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.446918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.658927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.824411  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.947802  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.159449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.318706  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.454138  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.658608  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.819013  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.948036  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.159253  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.319616  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.449075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.662100  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.824454  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.950365  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.161131  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.319196  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.447530  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.663409  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.820874  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.953095  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.165487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.319583  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.448606  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.659953  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.819503  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.975219  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.158372  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.318879  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.448192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.658937  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.820351  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.947275  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.158790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.319421  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.659923  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.822375  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.947862  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.159020  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.319073  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.447850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.659710  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.818515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.947671  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.160392  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.318657  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.448137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.660115  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.819099  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.951129  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.160373  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.325467  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:21.449746  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.659955  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.819131  994709 kapi.go:107] duration metric: took 1m37.503635731s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:21:21.948370  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.158762  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.447738  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.658570  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.949101  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.158220  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.451919  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.658790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.948375  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.159201  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.449117  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.659750  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.948295  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.160000  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.448116  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.658136  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.948058  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.158569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.447775  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.658964  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.948377  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.159144  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.448069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.658935  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.955751  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.159540  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.448912  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.662299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.947885  994709 kapi.go:107] duration metric: took 1m41.503580566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:21:28.951140  994709 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-693704 cluster.
	I1002 20:21:28.954142  994709 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:21:28.956995  994709 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 20:21:29.159855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:29.664073  994709 kapi.go:107] duration metric: took 1m45.009034533s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:21:31.039676  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:21:31.852592  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:21:31.852690  994709 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:21:31.856656  994709 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 20:21:31.859688  994709 addons.go:514] duration metric: took 1m53.876564642s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner ingress-dns metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 20:21:31.859739  994709 start.go:247] waiting for cluster config update ...
	I1002 20:21:31.859761  994709 start.go:256] writing updated cluster config ...
	I1002 20:21:31.860060  994709 ssh_runner.go:195] Run: rm -f paused
	I1002 20:21:31.863547  994709 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:31.867571  994709 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.872068  994709 pod_ready.go:94] pod "coredns-66bc5c9577-4kbq4" is "Ready"
	I1002 20:21:31.872092  994709 pod_ready.go:86] duration metric: took 4.493776ms for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.874237  994709 pod_ready.go:83] waiting for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.878256  994709 pod_ready.go:94] pod "etcd-addons-693704" is "Ready"
	I1002 20:21:31.878280  994709 pod_ready.go:86] duration metric: took 4.022961ms for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.880276  994709 pod_ready.go:83] waiting for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.885189  994709 pod_ready.go:94] pod "kube-apiserver-addons-693704" is "Ready"
	I1002 20:21:31.885218  994709 pod_ready.go:86] duration metric: took 4.915919ms for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.887484  994709 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.267515  994709 pod_ready.go:94] pod "kube-controller-manager-addons-693704" is "Ready"
	I1002 20:21:32.267553  994709 pod_ready.go:86] duration metric: took 380.043461ms for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.468152  994709 pod_ready.go:83] waiting for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.869233  994709 pod_ready.go:94] pod "kube-proxy-gdxqs" is "Ready"
	I1002 20:21:32.869266  994709 pod_ready.go:86] duration metric: took 401.082172ms for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.067662  994709 pod_ready.go:83] waiting for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469284  994709 pod_ready.go:94] pod "kube-scheduler-addons-693704" is "Ready"
	I1002 20:21:33.469361  994709 pod_ready.go:86] duration metric: took 401.671243ms for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469380  994709 pod_ready.go:40] duration metric: took 1.605801066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:33.530905  994709 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:21:33.534526  994709 out.go:179] * Done! kubectl is now configured to use "addons-693704" cluster and "default" namespace by default
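For reference, the opt-out that the gcp-auth messages above describe is a pod label. A minimal sketch, assuming only the `gcp-auth-skip-secret` key named in the log (the log names just the key; the "true" value and the pod spec itself are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # opts this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: busybox:stable
        command: ["sleep", "3600"]

Conversely, per the same messages, pods created before the addon finished only pick up credentials after being recreated or after rerunning the enable step with the --refresh flag mentioned above.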
	
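The inspektor-gadget failure recorded above is a client-side validation error: kubectl refuses any manifest whose top-level apiVersion and kind fields are unset, and the --validate=false suggested in the error text would merely silence the check rather than repair the file. The actual contents of /etc/kubernetes/addons/ig-crd.yaml are not captured in this report; as a generic illustration only (group, names, and versions hypothetical), a CRD manifest that passes this validation starts with both required fields set:

    apiVersion: apiextensions.k8s.io/v1    # required top-level field
    kind: CustomResourceDefinition         # required top-level field
    metadata:
      name: traces.gadget.kinvolk.io       # hypothetical CRD name
    spec:
      group: gadget.kinvolk.io
      scope: Namespaced
      names:
        plural: traces
        singular: trace
        kind: Trace
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object

An empty or truncated file fails with exactly the "[apiVersion not set, kind not set]" message seen here.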
	
	==> CRI-O <==
	Oct 02 20:33:28 addons-693704 crio[828]: time="2025-10-02T20:33:28.223494794Z" level=info msg="Removed container d4dc76ca5050e19eda318c93ad0e8759205e556d8093bef5d65c129800178fd5: kube-system/registry-creds-764b6fb674-6cg6b/registry-creds" id=18d5cfaa-e242-4a98-8197-be4ef500bce6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 20:33:49 addons-693704 crio[828]: time="2025-10-02T20:33:49.846812556Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=6f2c8b4c-44f9-4fba-a6d7-ec61d6ee4f06 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:33:49 addons-693704 crio[828]: time="2025-10-02T20:33:49.850744568Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.748687339Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=ed7fca3c-67fa-497f-b874-af32fe4ed6dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.75078679Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=4418e23e-e13f-4458-b595-d6e07d14b4fd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.752073469Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-6cg6b/registry-creds" id=152f796c-dd86-4ea8-8d48-90653fcaa6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.752330193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.760585617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.761165902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.800746713Z" level=info msg="Created container d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d: kube-system/registry-creds-764b6fb674-6cg6b/registry-creds" id=152f796c-dd86-4ea8-8d48-90653fcaa6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.802245506Z" level=info msg="Starting container: d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d" id=f87f8f6b-28fa-471c-8311-c7a6f79614b8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 20:34:18 addons-693704 crio[828]: time="2025-10-02T20:34:18.808893209Z" level=info msg="Started container" PID=9062 containerID=d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d description=kube-system/registry-creds-764b6fb674-6cg6b/registry-creds id=f87f8f6b-28fa-471c-8311-c7a6f79614b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ac1bdd8cfb6ce143232e9709a4d23a878330c5b75aa46c8b65aa1ed50620076
	Oct 02 20:34:18 addons-693704 conmon[9060]: conmon d67f8f41794c8f8604b9 <ninfo>: container 9062 exited with status 1
	Oct 02 20:34:19 addons-693704 crio[828]: time="2025-10-02T20:34:19.379336109Z" level=info msg="Removing container: 1313bfcaef39319df5198eec984426aaf58ab9a1c4fbe3c14a7c6bc9d9b20dac" id=bee139f4-efa7-4f3b-9f88-0b9a6ae7b66c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 20:34:19 addons-693704 crio[828]: time="2025-10-02T20:34:19.387665911Z" level=info msg="Error loading conmon cgroup of container 1313bfcaef39319df5198eec984426aaf58ab9a1c4fbe3c14a7c6bc9d9b20dac: cgroup deleted" id=bee139f4-efa7-4f3b-9f88-0b9a6ae7b66c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 20:34:19 addons-693704 crio[828]: time="2025-10-02T20:34:19.392343374Z" level=info msg="Removed container 1313bfcaef39319df5198eec984426aaf58ab9a1c4fbe3c14a7c6bc9d9b20dac: kube-system/registry-creds-764b6fb674-6cg6b/registry-creds" id=bee139f4-efa7-4f3b-9f88-0b9a6ae7b66c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 20:34:20 addons-693704 crio[828]: time="2025-10-02T20:34:20.129937087Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:34:50 addons-693704 crio[828]: time="2025-10-02T20:34:50.41416232Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=917afc3d-354b-4d41-99d0-dbbf85124b85 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:34:50 addons-693704 crio[828]: time="2025-10-02T20:34:50.417589492Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:34:50 addons-693704 crio[828]: time="2025-10-02T20:34:50.482755769Z" level=info msg="Stopping pod sandbox: 3f9a826ea130b098b83b12376b5895a77fc875a8bb5658ca72bd2072a10ac723" id=6c34a816-76d4-4d9a-9d5b-539986d0f307 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:34:50 addons-693704 crio[828]: time="2025-10-02T20:34:50.483209149Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 Namespace:local-path-storage ID:3f9a826ea130b098b83b12376b5895a77fc875a8bb5658ca72bd2072a10ac723 UID:23d0112c-1500-48bc-88ee-772123edd79c NetNS:/var/run/netns/091a9f0e-a82a-4842-a2ec-d67c7fd7480e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cfd0}] Aliases:map[]}"
	Oct 02 20:34:50 addons-693704 crio[828]: time="2025-10-02T20:34:50.483382929Z" level=info msg="Deleting pod local-path-storage_helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 from CNI network \"kindnet\" (type=ptp)"
	Oct 02 20:34:50 addons-693704 crio[828]: time="2025-10-02T20:34:50.518439232Z" level=info msg="Stopped pod sandbox: 3f9a826ea130b098b83b12376b5895a77fc875a8bb5658ca72bd2072a10ac723" id=6c34a816-76d4-4d9a-9d5b-539986d0f307 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:35:20 addons-693704 crio[828]: time="2025-10-02T20:35:20.691576594Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:35:22 addons-693704 crio[828]: time="2025-10-02T20:35:22.927713379Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	d67f8f41794c8       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             About a minute ago   Exited              registry-creds                           4                   1ac1bdd8cfb6c       registry-creds-764b6fb674-6cg6b            kube-system
	0bc9f0d1b235e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          13 minutes ago       Running             busybox                                  0                   a4b1fc9c97e53       busybox                                    default
	6928dd54cd320       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 minutes ago       Running             csi-snapshotter                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	40761b95b2196       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 14 minutes ago       Running             gcp-auth                                 0                   9c1545073abea       gcp-auth-78565c9fb4-27djq                  gcp-auth
	8860f0e019516       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          14 minutes ago       Running             csi-provisioner                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	36c49020464e2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            14 minutes ago       Running             liveness-probe                           0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b7161126faae3       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           14 minutes ago       Running             hostpath                                 0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b2b0003c8ca36       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             14 minutes ago       Running             controller                               0                   3a08c5d217c56       ingress-nginx-controller-9cc49f96f-9frwt   ingress-nginx
	2852575f20001       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            14 minutes ago       Running             gadget                                   0                   34878d06228a7       gadget-gljs2                               gadget
	ee97eb0b32c7f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                14 minutes ago       Running             node-driver-registrar                    0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	e42d2c0b7778e       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             14 minutes ago       Running             local-path-provisioner                   0                   b4f667a1ce299       local-path-provisioner-648f6765c9-v6khh    local-path-storage
	fc0714b2fd72f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              14 minutes ago       Running             registry-proxy                           0                   c8535afb414d5       registry-proxy-2kw45                       kube-system
	bca1297af7427       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   14 minutes ago       Exited              patch                                    0                   e925887ddf0d9       ingress-nginx-admission-patch-v6xpn        ingress-nginx
	627ce890f2b48       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               14 minutes ago       Running             cloud-spanner-emulator                   0                   49dda3c4634a4       cloud-spanner-emulator-85f6b7fc65-5wsmw    default
	16f4af5cddb75       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           14 minutes ago       Running             registry                                 0                   4bae41325f3f5       registry-66898fdd98-8rftt                  kube-system
	91fa943497ee5       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        14 minutes ago       Running             metrics-server                           0                   27cb63141e106       metrics-server-85b7d694d7-8pl6l            kube-system
	439510daf689e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               14 minutes ago       Running             minikube-ingress-dns                     0                   e547aac4b280e       kube-ingress-dns-minikube                  kube-system
	063fa56393267       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              14 minutes ago       Running             csi-resizer                              0                   20ac69c0a7e28       csi-hostpath-resizer-0                     kube-system
	948a7498f368d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   14 minutes ago       Running             csi-external-health-monitor-controller   0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	bbd0c0fdbe948       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             14 minutes ago       Running             csi-attacher                             0                   e6f6a7809eb96       csi-hostpath-attacher-0                    kube-system
	697e9a6f92fb8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   14 minutes ago       Exited              create                                   0                   ec9abb5f653b7       ingress-nginx-admission-create-fndzf       ingress-nginx
	4a5b5d50e1426       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     14 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ae9275c193e86       nvidia-device-plugin-daemonset-jblz6       kube-system
	4757a91ace2d4       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      15 minutes ago       Running             volume-snapshot-controller               0                   7cb6188e8093e       snapshot-controller-7d9fbc56b8-49h86       kube-system
	88520ea2c4ca7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      15 minutes ago       Running             volume-snapshot-controller               0                   4de0d58fcc8d5       snapshot-controller-7d9fbc56b8-bw7rc       kube-system
	9390fd50f454e       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              15 minutes ago       Running             yakd                                     0                   a77b4648943e2       yakd-dashboard-5ff678cb9-b48gd             yakd-dashboard
	ec242b99be750       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             15 minutes ago       Running             coredns                                  0                   5e1993cbe5e41       coredns-66bc5c9577-4kbq4                   kube-system
	165a582582a89       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             15 minutes ago       Running             storage-provisioner                      0                   8b4b5f8349762       storage-provisioner                        kube-system
	cde8e7a8a028e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             15 minutes ago       Running             kindnet-cni                              0                   b1a33925c911a       kindnet-p9zvn                              kube-system
	0703880dcf265       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             15 minutes ago       Running             kube-proxy                               0                   18175bde14b29       kube-proxy-gdxqs                           kube-system
	972d6e9616c37       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             16 minutes ago       Running             etcd                                     0                   789f38c5890c2       etcd-addons-693704                         kube-system
	020148eb47c8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             16 minutes ago       Running             kube-scheduler                           0                   3aa090880fcae       kube-scheduler-addons-693704               kube-system
	ab99c3bb8f644       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             16 minutes ago       Running             kube-controller-manager                  0                   629d2cf069469       kube-controller-manager-addons-693704      kube-system
	71c9ea9528918       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             16 minutes ago       Running             kube-apiserver                           0                   de4f0abfefce3       kube-apiserver-addons-693704               kube-system
	
	
	==> coredns [ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b] <==
	[INFO] 10.244.0.17:55859 - 34053 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.006721575s
	[INFO] 10.244.0.17:55859 - 46822 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000305001s
	[INFO] 10.244.0.17:55859 - 21325 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000282717s
	[INFO] 10.244.0.17:37045 - 20421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162088s
	[INFO] 10.244.0.17:37045 - 20651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000128325s
	[INFO] 10.244.0.17:51048 - 61194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092519s
	[INFO] 10.244.0.17:51048 - 61672 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085027s
	[INFO] 10.244.0.17:57091 - 44872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088334s
	[INFO] 10.244.0.17:57091 - 44684 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105589s
	[INFO] 10.244.0.17:59527 - 40959 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003459669s
	[INFO] 10.244.0.17:59527 - 41156 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003770241s
	[INFO] 10.244.0.17:59136 - 21305 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142257s
	[INFO] 10.244.0.17:59136 - 21125 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093717s
	[INFO] 10.244.0.21:41484 - 12317 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192315s
	[INFO] 10.244.0.21:60775 - 50484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142913s
	[INFO] 10.244.0.21:49862 - 44888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127521s
	[INFO] 10.244.0.21:54840 - 52239 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149642s
	[INFO] 10.244.0.21:42560 - 6869 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156624s
	[INFO] 10.244.0.21:41861 - 43315 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000298545s
	[INFO] 10.244.0.21:38412 - 8398 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002294645s
	[INFO] 10.244.0.21:40087 - 34579 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002408201s
	[INFO] 10.244.0.21:50163 - 3512 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006805026s
	[INFO] 10.244.0.21:42501 - 46640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006618816s
	[INFO] 10.244.0.23:46061 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191659s
	[INFO] 10.244.0.23:58330 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122318s
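The NXDOMAIN bursts in the coredns log above are ordinary search-path expansion: each lookup is retried with every suffix from the pod's resolv.conf (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the us-east-2.compute.internal host domain) before the bare name answers NOERROR. This is not a fault, but if the fan-out mattered, a pod could reduce it by lowering ndots; a minimal sketch (pod name hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-tuned                  # hypothetical pod name
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                   # dotted names are tried as-is, skipping search suffixes
      containers:
      - name: app
        image: busybox:stable
        command: ["sleep", "3600"]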
	
	
	==> describe nodes <==
	Name:               addons-693704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-693704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=addons-693704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-693704
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-693704"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:19:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-693704
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:34:30 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:34:30 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:34:30 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:34:30 +0000   Thu, 02 Oct 2025 20:20:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-693704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 db645666b7ad4f1695da9df78e9fa367
	  System UUID:                021278b1-6d13-4d8b-91c7-a5de147567f7
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     cloud-spanner-emulator-85f6b7fc65-5wsmw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  gadget                      gadget-gljs2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  gcp-auth                    gcp-auth-78565c9fb4-27djq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9frwt    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-4kbq4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-kkptd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-693704                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-p9zvn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-693704                250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-693704       200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-gdxqs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-693704                100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-85b7d694d7-8pl6l             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         15m
	  kube-system                 nvidia-device-plugin-daemonset-jblz6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-66898fdd98-8rftt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-creds-764b6fb674-6cg6b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-proxy-2kw45                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-7d9fbc56b8-49h86        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-7d9fbc56b8-bw7rc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-648f6765c9-v6khh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b48gd              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-693704 event: Registered Node addons-693704 in Controller
	  Normal   NodeReady                15m                kubelet          Node addons-693704 status is now: NodeReady
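	
	Note: the duplicated Starting/CgroupV1/NodeHas* event pairs above appear to reflect kubelet restarting once during provisioning, and the CgroupV1 warning means this node runs cgroup v1. A quick way to confirm the cgroup mode, a sketch assuming the profile name from this report:
	
	  # cgroup2fs means cgroup v2; tmpfs means cgroup v1
	  minikube ssh -p addons-693704 -- stat -fc %T /sys/fs/cgroup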
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
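	
	Note: the overlayfs warning is expected on this 5.15 kernel, since idmapped overlayfs layers need a newer kernel, and nothing in this run depends on them. To confirm the node kernel:
	
	  minikube ssh -p addons-693704 -- uname -r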
	
	
	==> etcd [972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3] <==
	{"level":"warn","ts":"2025-10-02T20:19:28.886646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.904572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.925806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.935913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.956578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.971517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.993677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.031509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.041915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.068902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.157895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.092047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.118929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.895880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.909631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.000732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.017116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:20:36.364046Z","caller":"traceutil/trace.go:172","msg":"trace[1063042819] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"113.56953ms","start":"2025-10-02T20:20:36.250465Z","end":"2025-10-02T20:20:36.364035Z","steps":["trace[1063042819] 'process raft request'  (duration: 56.881349ms)","trace[1063042819] 'compare'  (duration: 56.419938ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T20:20:36.365279Z","caller":"traceutil/trace.go:172","msg":"trace[29069078] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"104.71736ms","start":"2025-10-02T20:20:36.259205Z","end":"2025-10-02T20:20:36.363922Z","steps":["trace[29069078] 'process raft request'  (duration: 104.653649ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:29:27.693775Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1635}
	{"level":"info","ts":"2025-10-02T20:29:27.721695Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1635,"took":"27.317564ms","hash":816137782,"current-db-size-bytes":5775360,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3670016,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-10-02T20:29:27.721750Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":816137782,"revision":1635,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T20:34:27.700660Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2133}
	{"level":"info","ts":"2025-10-02T20:34:27.720259Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2133,"took":"19.068987ms","hash":430795367,"current-db-size-bytes":5775360,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3129344,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-10-02T20:34:27.720313Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":430795367,"revision":2133,"compact-revision":1635}
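	
	Note: the "rejected connection ... EOF" warnings are typically health probes that open the client port and close without completing a TLS handshake, and the compaction entries show routine compactions five minutes apart holding the db near 5.8 MB; neither indicates an etcd problem. A sketch for querying endpoint status directly (the pod name follows the etcd-<node> static-pod convention; cert paths assume minikube's kubeadm cert dir under /var/lib/minikube/certs):
	
	  kubectl --context addons-693704 -n kube-system exec etcd-addons-693704 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint status -w table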
	
	
	==> gcp-auth [40761b95b219669fa13be3f37e9874311bcd42514e92101fcec6f883bf46c837] <==
	2025/10/02 20:21:27 GCP Auth Webhook started!
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:55 Ready to marshal response ...
	2025/10/02 20:21:55 Ready to write response ...
	2025/10/02 20:21:59 Ready to marshal response ...
	2025/10/02 20:21:59 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:24:21 Ready to marshal response ...
	2025/10/02 20:24:21 Ready to write response ...
	2025/10/02 20:26:52 Ready to marshal response ...
	2025/10/02 20:26:52 Ready to write response ...
	2025/10/02 20:27:29 Ready to marshal response ...
	2025/10/02 20:27:29 Ready to write response ...
	2025/10/02 20:28:41 Ready to marshal response ...
	2025/10/02 20:28:41 Ready to write response ...
	2025/10/02 20:32:42 Ready to marshal response ...
	2025/10/02 20:32:42 Ready to write response ...
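	
	Note: each marshal/write pair above is the gcp-auth mutating webhook handling a pod admission request; it is what injects the fake GOOGLE_* env vars and the /google-app-creds.json mount visible in the pod descriptions later in this report. The webhook's configuration name is not shown in the log, so list the registrations:
	
	  kubectl --context addons-693704 get mutatingwebhookconfigurations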
	
	
	==> kernel <==
	 20:35:31 up  5:17,  0 user,  load average: 0.12, 0.65, 1.75
	Linux addons-693704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0] <==
	I1002 20:33:28.907693       1 main.go:301] handling current node
	I1002 20:33:38.914594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:33:38.914725       1 main.go:301] handling current node
	I1002 20:33:48.912064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:33:48.912099       1 main.go:301] handling current node
	I1002 20:33:58.910120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:33:58.910155       1 main.go:301] handling current node
	I1002 20:34:08.910139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:34:08.910176       1 main.go:301] handling current node
	I1002 20:34:18.908647       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:34:18.908680       1 main.go:301] handling current node
	I1002 20:34:28.908659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:34:28.908706       1 main.go:301] handling current node
	I1002 20:34:38.911034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:34:38.911135       1 main.go:301] handling current node
	I1002 20:34:48.911065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:34:48.911197       1 main.go:301] handling current node
	I1002 20:34:58.912842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:34:58.912881       1 main.go:301] handling current node
	I1002 20:35:08.915330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:35:08.915366       1 main.go:301] handling current node
	I1002 20:35:18.913187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:35:18.913230       1 main.go:301] handling current node
	I1002 20:35:28.914274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:35:28.914391       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:08.431339       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	E1002 20:21:08.433865       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	W1002 20:21:09.431415       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 20:21:09.431472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:09.431507       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431564       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 20:21:09.432661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:13.450452       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:13.450503       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:13.450794       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1002 20:21:13.499856       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 20:21:44.668705       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43290: use of closed network connection
	I1002 20:27:29.421426       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 20:27:29.744566       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.50.219"}
	I1002 20:29:29.938727       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
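	
	Note: the v1beta1.metrics.k8s.io 503s above mean the metrics-server Service had no ready endpoints while its APIService was already registered; the 20:21:13 "Adding GroupVersion metrics.k8s.io v1beta1" line marks the recovery, and the stale-GroupVersion errors in the controller-manager section below share the same root cause. A sketch to check the aggregated API and its backing endpoints:
	
	  kubectl --context addons-693704 get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
	  kubectl --context addons-693704 -n kube-system get endpoints metrics-server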
	
	
	==> kube-controller-manager [ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c] <==
	I1002 20:19:36.927821       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:19:36.927907       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-693704"
	I1002 20:19:36.927948       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 20:19:36.927971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:19:36.929043       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:19:36.929089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:19:36.929104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:19:36.929196       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:19:36.929242       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:19:36.930939       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:19:36.953633       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:19:36.957922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 20:19:42.958900       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 20:20:06.887630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:06.887888       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 20:20:06.887954       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 20:20:06.966287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 20:20:06.978573       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 20:20:06.989795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:20:07.080038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:20:21.939957       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 20:20:36.994429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:37.091221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1002 20:21:07.000284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:21:07.098427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1] <==
	I1002 20:19:38.989384       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:19:39.087738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:19:39.188580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:19:39.188619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:19:39.188702       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:19:39.263259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:19:39.267990       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:19:39.278942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:19:39.279269       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:19:39.279289       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:19:39.289355       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:19:39.289374       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:19:39.289655       1 config.go:200] "Starting service config controller"
	I1002 20:19:39.289662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:19:39.289995       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:19:39.290002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:19:39.290636       1 config.go:309] "Starting node config controller"
	I1002 20:19:39.290645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:19:39.290651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:19:39.390091       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:19:39.390138       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:19:39.390179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
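	
	Note: the only warning here is nodePortAddresses being unset, so NodePorts accept connections on every local IP; the cache-sync lines show an otherwise normal iptables-mode startup. To confirm the proxier mode from the logs (the label selector assumes the standard kubeadm kube-proxy DaemonSet):
	
	  kubectl --context addons-693704 -n kube-system logs -l k8s-app=kube-proxy --tail=200 | grep -i proxier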
	
	
	==> kube-scheduler [020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251] <==
	E1002 20:19:30.083075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:30.083123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:19:30.083172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:30.083221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:19:30.083269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:19:30.083318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:19:30.083367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:19:30.083415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:19:30.083460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.083513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.083555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:19:30.083651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:19:30.083692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:19:30.083739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:30.086243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 20:19:30.905348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:19:30.932288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.964617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.984039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:31.017892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:31.036527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:31.063255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 20:19:31.603691       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 20:32:06.229218       1 framework.go:1298] "Plugin failed" err="binding volumes: context deadline exceeded" plugin="VolumeBinding" pod="default/test-local-path" node="addons-693704"
	E1002 20:32:06.229485       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: context deadline exceeded" logger="UnhandledError" pod="default/test-local-path"
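	
	Note: the VolumeBinding "context deadline exceeded" means test-local-path waited out the binding timeout on its WaitForFirstConsumer claim; it matches the FailedScheduling event in the pod description later in this report. A sketch to chase the claim, using names taken from this report:
	
	  kubectl --context addons-693704 describe pvc test-pvc
	  kubectl --context addons-693704 get events --field-selector involvedObject.name=test-pvc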
	
	
	==> kubelet <==
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.644825    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/23d0112c-1500-48bc-88ee-772123edd79c-script\") pod \"23d0112c-1500-48bc-88ee-772123edd79c\" (UID: \"23d0112c-1500-48bc-88ee-772123edd79c\") "
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.644884    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/23d0112c-1500-48bc-88ee-772123edd79c-data\") pod \"23d0112c-1500-48bc-88ee-772123edd79c\" (UID: \"23d0112c-1500-48bc-88ee-772123edd79c\") "
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.644912    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d9gp\" (UniqueName: \"kubernetes.io/projected/23d0112c-1500-48bc-88ee-772123edd79c-kube-api-access-2d9gp\") pod \"23d0112c-1500-48bc-88ee-772123edd79c\" (UID: \"23d0112c-1500-48bc-88ee-772123edd79c\") "
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.645190    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0112c-1500-48bc-88ee-772123edd79c-data" (OuterVolumeSpecName: "data") pod "23d0112c-1500-48bc-88ee-772123edd79c" (UID: "23d0112c-1500-48bc-88ee-772123edd79c"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.645506    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d0112c-1500-48bc-88ee-772123edd79c-script" (OuterVolumeSpecName: "script") pod "23d0112c-1500-48bc-88ee-772123edd79c" (UID: "23d0112c-1500-48bc-88ee-772123edd79c"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.645640    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d0112c-1500-48bc-88ee-772123edd79c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "23d0112c-1500-48bc-88ee-772123edd79c" (UID: "23d0112c-1500-48bc-88ee-772123edd79c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.649439    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d0112c-1500-48bc-88ee-772123edd79c-kube-api-access-2d9gp" (OuterVolumeSpecName: "kube-api-access-2d9gp") pod "23d0112c-1500-48bc-88ee-772123edd79c" (UID: "23d0112c-1500-48bc-88ee-772123edd79c"). InnerVolumeSpecName "kube-api-access-2d9gp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.746733    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/23d0112c-1500-48bc-88ee-772123edd79c-gcp-creds\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.747037    1282 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/23d0112c-1500-48bc-88ee-772123edd79c-script\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.747173    1282 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/23d0112c-1500-48bc-88ee-772123edd79c-data\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:34:50 addons-693704 kubelet[1282]: I1002 20:34:50.747274    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2d9gp\" (UniqueName: \"kubernetes.io/projected/23d0112c-1500-48bc-88ee-772123edd79c-kube-api-access-2d9gp\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:34:52 addons-693704 kubelet[1282]: I1002 20:34:52.752227    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d0112c-1500-48bc-88ee-772123edd79c" path="/var/lib/kubelet/pods/23d0112c-1500-48bc-88ee-772123edd79c/volumes"
	Oct 02 20:34:57 addons-693704 kubelet[1282]: I1002 20:34:57.747620    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-8rftt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:34:59 addons-693704 kubelet[1282]: E1002 20:34:59.747689    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:35:02 addons-693704 kubelet[1282]: I1002 20:35:02.746965    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6cg6b" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:35:02 addons-693704 kubelet[1282]: I1002 20:35:02.747491    1282 scope.go:117] "RemoveContainer" containerID="d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d"
	Oct 02 20:35:02 addons-693704 kubelet[1282]: E1002 20:35:02.747764    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6cg6b_kube-system(d16ac5e8-a382-4faa-85dc-039ac18fa4cf)\"" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
	Oct 02 20:35:12 addons-693704 kubelet[1282]: E1002 20:35:12.750009    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:35:15 addons-693704 kubelet[1282]: I1002 20:35:15.747547    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6cg6b" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:35:15 addons-693704 kubelet[1282]: I1002 20:35:15.748054    1282 scope.go:117] "RemoveContainer" containerID="d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d"
	Oct 02 20:35:15 addons-693704 kubelet[1282]: E1002 20:35:15.748277    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6cg6b_kube-system(d16ac5e8-a382-4faa-85dc-039ac18fa4cf)\"" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
	Oct 02 20:35:24 addons-693704 kubelet[1282]: E1002 20:35:24.747736    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:35:28 addons-693704 kubelet[1282]: I1002 20:35:28.746929    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6cg6b" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:35:28 addons-693704 kubelet[1282]: I1002 20:35:28.749825    1282 scope.go:117] "RemoveContainer" containerID="d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d"
	Oct 02 20:35:28 addons-693704 kubelet[1282]: E1002 20:35:28.750350    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6cg6b_kube-system(d16ac5e8-a382-4faa-85dc-039ac18fa4cf)\"" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
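	
	Note: the toomanyrequests errors above are Docker Hub's anonymous pull rate limit, the common root cause behind the ImagePullBackOff failures in this run. One workaround sketch, assuming the host can still pull (or is authenticated):
	
	  # pull on the host, then side-load into the node to bypass its anonymous pulls
	  docker pull docker.io/nginx:alpine
	  minikube -p addons-693704 image load docker.io/nginx:alpine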
	
	
	==> storage-provisioner [165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa] <==
	W1002 20:35:07.035449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:09.038152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:09.042697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:11.045782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:11.052776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:13.057783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:13.063371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:15.067168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:15.071986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:17.075162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:17.079516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:19.082677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:19.089510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:21.092544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:21.097011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:23.100608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:23.104751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:25.108658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:25.133527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:27.157349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:27.163940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:29.167486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:29.174210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:31.178797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:35:31.185842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
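
Note: the storage-provisioner warnings at the end of the dump are noise, not failures; it still watches v1 Endpoints for leader election, which the warning itself says is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. The replacement objects are visible with:

  kubectl --context addons-693704 -n kube-system get endpointslices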
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-693704 -n addons-693704
helpers_test.go:269: (dbg) Run:  kubectl --context addons-693704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-693704 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-693704 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn: exit status 1 (135.371126ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-693704/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:27:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqkzj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rqkzj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-693704
	  Warning  Failed     3m15s (x2 over 6m53s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m15s (x2 over 6m53s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3m1s (x2 over 6m52s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m1s (x2 over 6m52s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m49s (x3 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-693704/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:21:59 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-78xtg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-78xtg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/task-pv-pod to addons-693704
	  Warning  Failed     12m                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m31s (x5 over 13m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     103s (x5 over 12m)   kubelet            Error: ErrImagePull
	  Warning  Failed     103s (x4 over 10m)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x16 over 12m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x16 over 12m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t66j5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t66j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  3m26s  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: context deadline exceeded

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fndzf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v6xpn" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-693704 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn: exit status 1
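
Note: the two NotFound errors are expected; the ingress-nginx admission create/patch pods are one-shot Job pods that get cleaned up after completion, so describing them by name fails once they are gone. Their parent Jobs can still be inspected:

  kubectl --context addons-693704 -n ingress-nginx get jobs,pods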
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (297.126508ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:35:32.931002 1008039 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:35:32.931688 1008039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:32.931705 1008039 out.go:374] Setting ErrFile to fd 2...
	I1002 20:35:32.931712 1008039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:32.932019 1008039 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:35:32.932364 1008039 mustload.go:65] Loading cluster: addons-693704
	I1002 20:35:32.932764 1008039 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:32.932788 1008039 addons.go:606] checking whether the cluster is paused
	I1002 20:35:32.932927 1008039 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:32.932964 1008039 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:35:32.933580 1008039 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:35:32.968265 1008039 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:32.968317 1008039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:35:32.997733 1008039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:35:33.100557 1008039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:33.100643 1008039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:33.131568 1008039 cri.go:89] found id: "d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d"
	I1002 20:35:33.131667 1008039 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:35:33.131689 1008039 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:35:33.131709 1008039 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:35:33.131739 1008039 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:35:33.131759 1008039 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:35:33.131780 1008039 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:35:33.131799 1008039 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:35:33.131839 1008039 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:35:33.131868 1008039 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:35:33.131891 1008039 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:35:33.131923 1008039 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:35:33.131943 1008039 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:35:33.131964 1008039 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:35:33.131996 1008039 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:35:33.132030 1008039 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:35:33.132075 1008039 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:35:33.132095 1008039 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:35:33.132120 1008039 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:35:33.132138 1008039 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:35:33.132169 1008039 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:35:33.132187 1008039 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:35:33.132207 1008039 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:35:33.132230 1008039 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:35:33.132257 1008039 cri.go:89] found id: ""
	I1002 20:35:33.132346 1008039 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:35:33.149055 1008039 out.go:203] 
	W1002 20:35:33.152103 1008039 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:35:33.152127 1008039 out.go:285] * 
	* 
	W1002 20:35:33.160084 1008039 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:35:33.163198 1008039 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable ingress --alsologtostderr -v=1: exit status 11 (263.655094ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:35:33.229544 1008088 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:35:33.230356 1008088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:33.230366 1008088 out.go:374] Setting ErrFile to fd 2...
	I1002 20:35:33.230371 1008088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:33.230633 1008088 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:35:33.231036 1008088 mustload.go:65] Loading cluster: addons-693704
	I1002 20:35:33.231481 1008088 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:33.231507 1008088 addons.go:606] checking whether the cluster is paused
	I1002 20:35:33.231664 1008088 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:33.231708 1008088 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:35:33.232196 1008088 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:35:33.250509 1008088 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:33.250574 1008088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:35:33.267638 1008088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:35:33.364602 1008088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:33.364701 1008088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:33.399926 1008088 cri.go:89] found id: "d67f8f41794c8f8604b97bc25a552be0c4f2e4321639562194b4e10f9bc9b24d"
	I1002 20:35:33.399949 1008088 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:35:33.399964 1008088 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:35:33.399968 1008088 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:35:33.399972 1008088 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:35:33.399976 1008088 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:35:33.399979 1008088 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:35:33.399983 1008088 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:35:33.399986 1008088 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:35:33.399993 1008088 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:35:33.399996 1008088 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:35:33.400000 1008088 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:35:33.400003 1008088 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:35:33.400007 1008088 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:35:33.400010 1008088 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:35:33.400016 1008088 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:35:33.400023 1008088 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:35:33.400026 1008088 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:35:33.400030 1008088 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:35:33.400032 1008088 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:35:33.400037 1008088 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:35:33.400040 1008088 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:35:33.400043 1008088 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:35:33.400045 1008088 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:35:33.400048 1008088 cri.go:89] found id: ""
	I1002 20:35:33.400100 1008088 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:35:33.414815 1008088 out.go:203] 
	W1002 20:35:33.417749 1008088 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:35:33.417773 1008088 out.go:285] * 
	* 
	W1002 20:35:33.425509 1008088 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:35:33.428494 1008088 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (484.36s)
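
Note: every MK_ADDON_DISABLE_PAUSED failure in this run has the same shape: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then querying container state with `sudo runc list -f json`, and that second command fails on this node because /run/runc does not exist. A minimal way to reproduce the failing check by hand against this profile (a sketch, assuming the addons-693704 node from the logs above is still up; it mirrors, but is not, the harness's SSH path):

	# same state query minikube issues over SSH in the stderr above
	out/minikube-linux-arm64 -p addons-693704 ssh -- sudo runc list -f json
	# observed failure mode:
	#   time="..." level=error msg="open /run/runc: no such file or directory"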

TestAddons/parallel/InspektorGadget (6.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gljs2" [d5399bfd-29cf-4534-863a-fc14b41214e7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003744347s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (279.864096ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:27:23.499035 1004714 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:23.499861 1004714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:23.499881 1004714 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:23.499888 1004714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:23.500235 1004714 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:27:23.500608 1004714 mustload.go:65] Loading cluster: addons-693704
	I1002 20:27:23.501084 1004714 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:23.501107 1004714 addons.go:606] checking whether the cluster is paused
	I1002 20:27:23.501260 1004714 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:23.501298 1004714 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:27:23.501891 1004714 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:27:23.521008 1004714 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:23.521076 1004714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:27:23.559282 1004714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:27:23.656851 1004714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:27:23.656949 1004714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:27:23.687374 1004714 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:27:23.687408 1004714 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:27:23.687414 1004714 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:27:23.687418 1004714 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:27:23.687421 1004714 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:27:23.687425 1004714 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:27:23.687428 1004714 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:27:23.687452 1004714 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:27:23.687462 1004714 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:27:23.687469 1004714 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:27:23.687472 1004714 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:27:23.687476 1004714 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:27:23.687480 1004714 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:27:23.687488 1004714 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:27:23.687492 1004714 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:27:23.687502 1004714 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:27:23.687510 1004714 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:27:23.687526 1004714 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:27:23.687531 1004714 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:27:23.687534 1004714 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:27:23.687540 1004714 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:27:23.687549 1004714 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:27:23.687552 1004714 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:27:23.687555 1004714 cri.go:89] found id: ""
	I1002 20:27:23.687623 1004714 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:27:23.702722 1004714 out.go:203] 
	W1002 20:27:23.706259 1004714 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:27:23.706286 1004714 out.go:285] * 
	* 
	W1002 20:27:23.714111 1004714 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:27:23.717735 1004714 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

+
TestAddons/parallel/MetricsServer (5.35s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.264233ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003233138s
addons_test.go:463: (dbg) Run:  kubectl --context addons-693704 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (250.856535ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:27:28.874644 1004787 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:28.875499 1004787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:28.875549 1004787 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:28.875570 1004787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:28.875881 1004787 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:27:28.878260 1004787 mustload.go:65] Loading cluster: addons-693704
	I1002 20:27:28.878792 1004787 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:28.878830 1004787 addons.go:606] checking whether the cluster is paused
	I1002 20:27:28.878962 1004787 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:28.879000 1004787 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:27:28.879492 1004787 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:27:28.898683 1004787 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:28.898750 1004787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:27:28.916454 1004787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:27:29.012803 1004787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:27:29.012884 1004787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:27:29.041731 1004787 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:27:29.041800 1004787 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:27:29.041820 1004787 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:27:29.041838 1004787 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:27:29.041858 1004787 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:27:29.041890 1004787 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:27:29.041909 1004787 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:27:29.041928 1004787 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:27:29.041950 1004787 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:27:29.041980 1004787 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:27:29.042005 1004787 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:27:29.042025 1004787 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:27:29.042098 1004787 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:27:29.042110 1004787 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:27:29.042114 1004787 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:27:29.042119 1004787 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:27:29.042122 1004787 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:27:29.042126 1004787 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:27:29.042129 1004787 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:27:29.042132 1004787 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:27:29.042140 1004787 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:27:29.042144 1004787 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:27:29.042147 1004787 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:27:29.042150 1004787 cri.go:89] found id: ""
	I1002 20:27:29.042213 1004787 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:27:29.056808 1004787 out.go:203] 
	W1002 20:27:29.059610 1004787 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:27:29.059636 1004787 out.go:285] * 
	* 
	W1002 20:27:29.067436 1004787 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:27:29.070443 1004787 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.35s)

+
TestAddons/parallel/CSI (371.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1002 20:21:51.708599  993954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 20:21:51.714003  993954 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 20:21:51.714109  993954 kapi.go:107] duration metric: took 5.522279ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.536439ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-693704 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc hpvc -o jsonpath={.status.phase} -n default
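
Aside: the repeated `get pvc` calls above are the test helper's phase-polling loop. An equivalent one-shot check with plain kubectl (a hedged alternative for local reruns, not what helpers_test.go actually runs) would be:

	kubectl --context addons-693704 -n default wait pvc/hpvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s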
addons_test.go:562: (dbg) Run:  kubectl --context addons-693704 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0a97c0d4-0277-4225-81aa-39349ced9b52] Pending
helpers_test.go:352: "task-pv-pod" [0a97c0d4-0277-4225-81aa-39349ced9b52] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-693704 -n addons-693704
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-02 20:27:59.472555027 +0000 UTC m=+595.572491516
addons_test.go:567: (dbg) Run:  kubectl --context addons-693704 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-693704 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-693704/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:21:59 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.24
IPs:
  IP:  10.244.0.24
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-78xtg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-78xtg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-693704
  Warning  Failed     4m57s                kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     53s (x3 over 4m57s)  kubelet            Error: ErrImagePull
  Warning  Failed     53s (x2 over 2m56s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    25s (x4 over 4m57s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     25s (x4 over 4m57s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    11s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-693704 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-693704 logs task-pv-pod -n default: exit status 1 (98.970278ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-693704 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
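
Note: the describe output above shows the failure is not CSI itself: task-pv-container never starts because unauthenticated docker.io pulls of nginx are rate-limited. One way to take Docker Hub out of the loop when re-running this locally (a sketch, assuming the image can be pulled or is already cached on the host) is to side-load it into the node before recreating the pod:

	docker pull docker.io/nginx:latest   # host-side pull (authenticated or cached)
	out/minikube-linux-arm64 -p addons-693704 image load docker.io/nginx:latest
	kubectl --context addons-693704 -n default delete pod task-pv-pod
	kubectl --context addons-693704 create -f testdata/csi-hostpath-driver/pv-pod.yaml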
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-693704
helpers_test.go:243: (dbg) docker inspect addons-693704:

-- stdout --
	[
	    {
	        "Id": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	        "Created": "2025-10-02T20:19:07.144298893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995109,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:19:07.216699876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hostname",
	        "HostsPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hosts",
	        "LogPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277-json.log",
	        "Name": "/addons-693704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-693704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-693704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	                "LowerDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-693704",
	                "Source": "/var/lib/docker/volumes/addons-693704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-693704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-693704",
	                "name.minikube.sigs.k8s.io": "addons-693704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab8175306a77dcd2868d77b0652aff78896362c7258aefc47fe7a07059e18c86",
	            "SandboxKey": "/var/run/docker/netns/ab8175306a77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-693704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:98:f0:2f:5f:be",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2b7a73ec267c22f9c2a0b05d90a02bfb26f74cfccf22ef9af628da6d1b040f0",
	                    "EndpointID": "a29bf68bc8126d88282105e99c5ad7822f95d3abd8c683fc3272ac8e0ad9c3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-693704",
	                        "d39c48e99245"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
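
Note: the NetworkSettings.Ports block in the inspect output above is what the cli_runner template queries in the disable logs resolve against; the same lookup can be run directly and matches the Port:33900 SSH client seen earlier:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-693704
	# -> 33900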
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-693704 -n addons-693704
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-693704 logs -n 25: (1.707780876s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ -o=json --download-only -p download-only-569491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p download-docker-496636 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p download-docker-496636                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p binary-mirror-261948 --alsologtostderr --binary-mirror http://127.0.0.1:38235 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p binary-mirror-261948                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ addons  │ disable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p addons-693704 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ ip      │ addons-693704 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-693704 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:42.587429  994709 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:42.587660  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.587694  994709 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:42.587713  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.588005  994709 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:18:42.588496  994709 out.go:368] Setting JSON to false
	I1002 20:18:42.589377  994709 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18060,"bootTime":1759418263,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:18:42.589480  994709 start.go:140] virtualization:  
	I1002 20:18:42.592863  994709 out.go:179] * [addons-693704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:18:42.596651  994709 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:42.596802  994709 notify.go:221] Checking for updates...
	I1002 20:18:42.602490  994709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:42.605403  994709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:18:42.608387  994709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:18:42.611210  994709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:18:42.614017  994709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:42.617196  994709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:42.641430  994709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:18:42.641548  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.702297  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.693145863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.702404  994709 docker.go:319] overlay module found
	I1002 20:18:42.705389  994709 out.go:179] * Using the docker driver based on user configuration
	I1002 20:18:42.708231  994709 start.go:306] selected driver: docker
	I1002 20:18:42.708247  994709 start.go:936] validating driver "docker" against <nil>
	I1002 20:18:42.708259  994709 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:42.708953  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.762696  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.753788413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.762850  994709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:42.763087  994709 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:42.766074  994709 out.go:179] * Using Docker driver with root privileges
	I1002 20:18:42.768763  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:18:42.768836  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:42.768849  994709 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:42.768919  994709 start.go:350] cluster config:
	{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:42.771909  994709 out.go:179] * Starting "addons-693704" primary control-plane node in "addons-693704" cluster
	I1002 20:18:42.774712  994709 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:42.777590  994709 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:42.780428  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:42.780455  994709 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:42.780491  994709 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:42.780500  994709 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:42.780575  994709 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 20:18:42.780584  994709 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:42.780914  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:18:42.780943  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json: {Name:mkd60ee77440eccb122eacb378637e77c2fde5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:42.795665  994709 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:42.795798  994709 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:18:42.795824  994709 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:18:42.795836  994709 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:18:42.795846  994709 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:18:42.795852  994709 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:19:00.985065  994709 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:19:00.985108  994709 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:19:00.985137  994709 start.go:361] acquireMachinesLock for addons-693704: {Name:mkeb9eb5752430ab2d33310b44640ce93b8d2df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:19:00.985263  994709 start.go:365] duration metric: took 102.298µs to acquireMachinesLock for "addons-693704"
	I1002 20:19:00.985295  994709 start.go:94] Provisioning new machine with config: &{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:00.985372  994709 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:19:00.988832  994709 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:19:00.989104  994709 start.go:160] libmachine.API.Create for "addons-693704" (driver="docker")
	I1002 20:19:00.989159  994709 client.go:168] LocalClient.Create starting
	I1002 20:19:00.989296  994709 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 20:19:01.433837  994709 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 20:19:01.564238  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:19:01.580044  994709 cli_runner.go:211] docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:19:01.580136  994709 network_create.go:284] running [docker network inspect addons-693704] to gather additional debugging logs...
	I1002 20:19:01.580158  994709 cli_runner.go:164] Run: docker network inspect addons-693704
	W1002 20:19:01.596534  994709 cli_runner.go:211] docker network inspect addons-693704 returned with exit code 1
	I1002 20:19:01.596569  994709 network_create.go:287] error running [docker network inspect addons-693704]: docker network inspect addons-693704: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-693704 not found
	I1002 20:19:01.596590  994709 network_create.go:289] output of [docker network inspect addons-693704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-693704 not found
	
	** /stderr **
	I1002 20:19:01.596688  994709 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:01.612608  994709 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f17c0}
	I1002 20:19:01.612647  994709 network_create.go:124] attempt to create docker network addons-693704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:19:01.612711  994709 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-693704 addons-693704
	I1002 20:19:01.677264  994709 network_create.go:108] docker network addons-693704 192.168.49.0/24 created
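	The subnet and gateway chosen above can be confirmed after the fact; a minimal sketch, assuming the addons-693704 network still exists on the host:
	
	  docker network inspect addons-693704 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # for this run: 192.168.49.0/24 192.168.49.1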
	I1002 20:19:01.677303  994709 kic.go:121] calculated static IP "192.168.49.2" for the "addons-693704" container
	I1002 20:19:01.677378  994709 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:19:01.693107  994709 cli_runner.go:164] Run: docker volume create addons-693704 --label name.minikube.sigs.k8s.io=addons-693704 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:19:01.711600  994709 oci.go:103] Successfully created a docker volume addons-693704
	I1002 20:19:01.711704  994709 cli_runner.go:164] Run: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:19:02.731832  994709 cli_runner.go:217] Completed: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (1.020058685s)
	I1002 20:19:02.731865  994709 oci.go:107] Successfully prepared a docker volume addons-693704
	I1002 20:19:02.731897  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:02.731915  994709 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:19:02.731979  994709 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:19:07.072259  994709 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.340238594s)
	I1002 20:19:07.072312  994709 kic.go:203] duration metric: took 4.340372991s to extract preloaded images to volume ...
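	The extraction above is a reusable pattern: populate a named volume by mounting it into a throwaway container whose entrypoint is tar. A sketch with hypothetical names (minikube additionally passes -I lz4 because its preload tarball is lz4-compressed):
	
	  # data.tar and myvolume are placeholders, not from this run
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PWD/data.tar:/data.tar:ro" \
	    -v myvolume:/extract \
	    debian:bookworm -xf /data.tar -C /extract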
	W1002 20:19:07.072445  994709 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:19:07.072554  994709 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:19:07.131614  994709 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-693704 --name addons-693704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-693704 --network addons-693704 --ip 192.168.49.2 --volume addons-693704:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
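	Each --publish=127.0.0.1:: flag above lets Docker assign an ephemeral host port; the assignment can be recovered with the same inspect template minikube itself uses a few lines below:
	
	  docker container inspect -f \
	    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-693704
	  # in this run the SSH port resolves to 33900 (see the SSH client lines below)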
	I1002 20:19:07.425756  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Running}}
	I1002 20:19:07.450427  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.471353  994709 cli_runner.go:164] Run: docker exec addons-693704 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:19:07.519322  994709 oci.go:144] the created container "addons-693704" has a running status.
	I1002 20:19:07.519348  994709 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa...
	I1002 20:19:07.874970  994709 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:19:07.902253  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.924631  994709 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:19:07.924649  994709 kic_runner.go:114] Args: [docker exec --privileged addons-693704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:19:07.982879  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:08.009002  994709 machine.go:93] provisionDockerMachine start ...
	I1002 20:19:08.009096  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:08.026925  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:08.027256  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:08.027273  994709 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:19:08.027902  994709 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 20:19:11.161848  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.161874  994709 ubuntu.go:182] provisioning hostname "addons-693704"
	I1002 20:19:11.161998  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.180011  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.180318  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.180334  994709 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-693704 && echo "addons-693704" | sudo tee /etc/hostname
	I1002 20:19:11.318599  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.318673  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.334766  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.335074  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.335095  994709 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-693704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-693704/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-693704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:19:11.466309  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:19:11.466378  994709 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 20:19:11.466405  994709 ubuntu.go:190] setting up certificates
	I1002 20:19:11.466416  994709 provision.go:84] configureAuth start
	I1002 20:19:11.466491  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:11.484411  994709 provision.go:143] copyHostCerts
	I1002 20:19:11.484497  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 20:19:11.484648  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 20:19:11.484708  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 20:19:11.484757  994709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.addons-693704 san=[127.0.0.1 192.168.49.2 addons-693704 localhost minikube]
	I1002 20:19:11.600457  994709 provision.go:177] copyRemoteCerts
	I1002 20:19:11.600526  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:19:11.600571  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.617715  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:11.713831  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:19:11.731711  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:19:11.748544  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:19:11.765398  994709 provision.go:87] duration metric: took 298.94846ms to configureAuth
	I1002 20:19:11.765428  994709 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:19:11.765610  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:11.765720  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.782571  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.782895  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.782917  994709 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:19:12.024388  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:19:12.024409  994709 machine.go:96] duration metric: took 4.015387209s to provisionDockerMachine
	I1002 20:19:12.024420  994709 client.go:171] duration metric: took 11.035249443s to LocalClient.Create
	I1002 20:19:12.024430  994709 start.go:168] duration metric: took 11.035328481s to libmachine.API.Create "addons-693704"
	I1002 20:19:12.024438  994709 start.go:294] postStartSetup for "addons-693704" (driver="docker")
	I1002 20:19:12.024448  994709 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:19:12.024531  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:19:12.024581  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.046435  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.145575  994709 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:19:12.148535  994709 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:19:12.148564  994709 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:19:12.148574  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 20:19:12.148638  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 20:19:12.148666  994709 start.go:297] duration metric: took 124.222688ms for postStartSetup
	I1002 20:19:12.148981  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.164538  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:19:12.164807  994709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:19:12.164866  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.181186  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.274914  994709 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:19:12.279510  994709 start.go:129] duration metric: took 11.294122752s to createHost
	I1002 20:19:12.279576  994709 start.go:84] releasing machines lock for "addons-693704", held for 11.294297786s
	I1002 20:19:12.279683  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.298232  994709 ssh_runner.go:195] Run: cat /version.json
	I1002 20:19:12.298284  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.298302  994709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:19:12.298368  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.327555  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.332727  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.506484  994709 ssh_runner.go:195] Run: systemctl --version
	I1002 20:19:12.512752  994709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:19:12.553418  994709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:19:12.557546  994709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:19:12.557619  994709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:19:12.586608  994709 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:19:12.586633  994709 start.go:496] detecting cgroup driver to use...
	I1002 20:19:12.586667  994709 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:19:12.586718  994709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:19:12.605523  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:19:12.618955  994709 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:19:12.619019  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:19:12.636190  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:19:12.655245  994709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:19:12.773294  994709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:19:12.899674  994709 docker.go:234] disabling docker service ...
	I1002 20:19:12.899796  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:19:12.921306  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:19:12.935583  994709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:19:13.058429  994709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:19:13.191274  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:19:13.203980  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:19:13.218083  994709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:19:13.218172  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.227208  994709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:19:13.227310  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.236115  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.244683  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.253282  994709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:19:13.260942  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.269710  994709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.282906  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.291613  994709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:19:13.298701  994709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:19:13.306154  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.416108  994709 ssh_runner.go:195] Run: sudo systemctl restart crio
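	Pieced together from the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly the following fragment (a reconstruction from the commands, not a captured file; the section headers are assumed from the stock CRI-O layout):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]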
	I1002 20:19:13.549800  994709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:19:13.549963  994709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:19:13.553947  994709 start.go:564] Will wait 60s for crictl version
	I1002 20:19:13.554015  994709 ssh_runner.go:195] Run: which crictl
	I1002 20:19:13.557729  994709 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:19:13.584434  994709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:19:13.584598  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.611885  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.643761  994709 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:19:13.646706  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:13.662159  994709 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:19:13.665953  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
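	The one-liner above is an idempotent edit: drop any stale line for the name, append the fresh mapping, and replace the file in one copy. Generalized with placeholder values:
	
	  NAME=host.example.internal; IP=10.0.0.1   # placeholders, not from this run
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts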
	I1002 20:19:13.675384  994709 kubeadm.go:883] updating cluster {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:19:13.675498  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:13.675559  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.707568  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.707592  994709 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:19:13.707650  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.733091  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.733117  994709 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:19:13.733126  994709 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:19:13.733260  994709 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-693704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:19:13.733342  994709 ssh_runner.go:195] Run: crio config
	I1002 20:19:13.792130  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:13.792153  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:13.792194  994709 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:19:13.792227  994709 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-693704 NodeName:addons-693704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:19:13.792401  994709 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-693704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:19:13.792492  994709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:19:13.800668  994709 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:19:13.800767  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:19:13.808293  994709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 20:19:13.821242  994709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:19:13.834169  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
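
	Note: the three generated artifacts land on the node here: the kubelet drop-in, the kubelet unit, and the kubeadm config staged as kubeadm.yaml.new (it is only copied over kubeadm.yaml at StartCluster time, further below). If a rendered config like this needs a sanity check by hand, recent kubeadm releases can validate it directly; a sketch, assuming the bundled v1.34.1 binaries:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new
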
	I1002 20:19:13.846928  994709 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:19:13.850566  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
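
	Note: that one-liner is minikube's idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal entry, append the fresh mapping, write to a PID-suffixed temp file, then sudo cp it back into place (cp rather than mv, since /etc/hosts is a bind mount inside the docker-driver node and cannot be replaced by rename). The same pattern, generalized:

	    HOST=control-plane.minikube.internal IP=192.168.49.2
	    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
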
	I1002 20:19:13.860224  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.968588  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:19:13.985352  994709 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704 for IP: 192.168.49.2
	I1002 20:19:13.985422  994709 certs.go:195] generating shared ca certs ...
	I1002 20:19:13.985470  994709 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:13.985658  994709 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 20:19:15.330293  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt ...
	I1002 20:19:15.330325  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt: {Name:mk4cd3e6dd08eb98d92774a50706472e7144a029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330529  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key ...
	I1002 20:19:15.330543  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key: {Name:mk973528442a241534dab3b3f10010ef617c41eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330647  994709 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 20:19:15.997150  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt ...
	I1002 20:19:15.997181  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt: {Name:mk99f3de897f678c1a5844576ab27113951f2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997373  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key ...
	I1002 20:19:15.997386  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key: {Name:mka357a75cbeebaba7cc94478a077ee2190bafb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997484  994709 certs.go:257] generating profile certs ...
	I1002 20:19:15.997541  994709 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key
	I1002 20:19:15.997561  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt with IP's: []
	I1002 20:19:16.185268  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt ...
	I1002 20:19:16.185298  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: {Name:mk19c4790d2aed31a89cf09dcf81ae3f076c409b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185485  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key ...
	I1002 20:19:16.185498  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key: {Name:mk1b58c21fd0fb98ae80d1aeead9a8a2c7b84f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185581  994709 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d
	I1002 20:19:16.185600  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:19:16.909759  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d ...
	I1002 20:19:16.909792  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d: {Name:mkcdcc8a35d2bead0bc666b364b50007c53b8ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.910784  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d ...
	I1002 20:19:16.910803  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d: {Name:mk54e705787535bd0f02f9a6cb06ac271457b26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.911454  994709 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt
	I1002 20:19:16.911552  994709 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key
	I1002 20:19:16.911609  994709 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key
	I1002 20:19:16.911632  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt with IP's: []
	I1002 20:19:17.189632  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt ...
	I1002 20:19:17.189663  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt: {Name:mkc2967e5b8de8de5ffc244b2174ce7d1307c7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.189855  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key ...
	I1002 20:19:17.189870  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key: {Name:mk3a5d9aa39ed72b68b1236fc674f044b595f3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.190670  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:19:17.190720  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:19:17.190746  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:19:17.190775  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 20:19:17.191345  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:19:17.209222  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:19:17.228051  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:19:17.245976  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:19:17.263876  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:19:17.281588  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:19:17.300066  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:19:17.317623  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:19:17.335889  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:19:17.355499  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:19:17.368597  994709 ssh_runner.go:195] Run: openssl version
	I1002 20:19:17.375290  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:19:17.383559  994709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387356  994709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387462  994709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.428204  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
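
	Note: the two steps above implement OpenSSL's hashed CA directory layout: /etc/ssl/certs must contain a <subject-hash>.0 symlink whose name is the output of openssl x509 -hash for the certificate (b5213941 is that hash for minikube's CA). Recomputing the link name by hand:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
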
	I1002 20:19:17.436613  994709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:19:17.440314  994709 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:19:17.440367  994709 kubeadm.go:400] StartCluster: {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:19:17.440454  994709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:19:17.440516  994709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:19:17.467595  994709 cri.go:89] found id: ""
	I1002 20:19:17.467677  994709 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:19:17.475494  994709 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:19:17.483312  994709 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:19:17.483390  994709 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:19:17.491411  994709 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:19:17.491431  994709 kubeadm.go:157] found existing configuration files:
	
	I1002 20:19:17.491483  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:19:17.499089  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:19:17.499169  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:19:17.506794  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:19:17.514714  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:19:17.514785  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:19:17.522181  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.530993  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:19:17.531060  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.538976  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:19:17.546795  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:19:17.546892  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:19:17.554492  994709 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:19:17.596193  994709 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:19:17.596303  994709 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:19:17.627320  994709 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:19:17.627397  994709 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:19:17.627440  994709 kubeadm.go:318] OS: Linux
	I1002 20:19:17.627493  994709 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:19:17.627548  994709 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:19:17.627604  994709 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:19:17.627659  994709 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:19:17.627714  994709 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:19:17.627769  994709 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:19:17.627820  994709 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:19:17.627872  994709 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:19:17.627924  994709 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:19:17.698891  994709 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:19:17.699015  994709 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:19:17.699132  994709 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:19:17.708645  994709 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:19:17.711822  994709 out.go:252]   - Generating certificates and keys ...
	I1002 20:19:17.711957  994709 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:19:17.712048  994709 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:19:17.858214  994709 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:19:19.472133  994709 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:19:19.853869  994709 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:19:20.278527  994709 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:19:21.038810  994709 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:19:21.039005  994709 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:21.583298  994709 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:19:21.583465  994709 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:22.178821  994709 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:19:22.869729  994709 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:19:23.067072  994709 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:19:23.067180  994709 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:19:23.190079  994709 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:19:23.633624  994709 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:19:23.861907  994709 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:19:24.252326  994709 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:19:24.757359  994709 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:19:24.758089  994709 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:19:24.760711  994709 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:19:24.764198  994709 out.go:252]   - Booting up control plane ...
	I1002 20:19:24.764310  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:19:24.764403  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:19:24.764489  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:19:24.780867  994709 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:19:24.781188  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:19:24.788581  994709 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:19:24.789049  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:19:24.789397  994709 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:19:24.926323  994709 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:19:24.926459  994709 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:19:26.427259  994709 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501639322s
	I1002 20:19:26.430848  994709 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:19:26.430969  994709 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:19:26.431069  994709 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:19:26.431155  994709 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:19:28.445585  994709 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.013932999s
	I1002 20:19:30.026061  994709 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.595131543s
	I1002 20:19:31.934100  994709 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501085496s
	I1002 20:19:31.955369  994709 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:19:31.978849  994709 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:19:32.006745  994709 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:19:32.007240  994709 kubeadm.go:318] [mark-control-plane] Marking the node addons-693704 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:19:32.024906  994709 kubeadm.go:318] [bootstrap-token] Using token: 1gg1hv.lld6lawd4ni62mxk
	I1002 20:19:32.028031  994709 out.go:252]   - Configuring RBAC rules ...
	I1002 20:19:32.028186  994709 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:19:32.038937  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:19:32.049818  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:19:32.054935  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:19:32.062162  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:19:32.070713  994709 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:19:32.338182  994709 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:19:32.784741  994709 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:19:33.338747  994709 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:19:33.340165  994709 kubeadm.go:318] 
	I1002 20:19:33.340273  994709 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:19:33.340285  994709 kubeadm.go:318] 
	I1002 20:19:33.340381  994709 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:19:33.340391  994709 kubeadm.go:318] 
	I1002 20:19:33.340426  994709 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:19:33.340507  994709 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:19:33.340581  994709 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:19:33.340595  994709 kubeadm.go:318] 
	I1002 20:19:33.340666  994709 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:19:33.340674  994709 kubeadm.go:318] 
	I1002 20:19:33.340728  994709 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:19:33.340734  994709 kubeadm.go:318] 
	I1002 20:19:33.340801  994709 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:19:33.340885  994709 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:19:33.340967  994709 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:19:33.340973  994709 kubeadm.go:318] 
	I1002 20:19:33.341069  994709 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:19:33.341173  994709 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:19:33.341179  994709 kubeadm.go:318] 
	I1002 20:19:33.341310  994709 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341442  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 20:19:33.341466  994709 kubeadm.go:318] 	--control-plane 
	I1002 20:19:33.341470  994709 kubeadm.go:318] 
	I1002 20:19:33.341572  994709 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:19:33.341578  994709 kubeadm.go:318] 
	I1002 20:19:33.341672  994709 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341797  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 20:19:33.345719  994709 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:19:33.345963  994709 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:19:33.346097  994709 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
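
	Note: the --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key; it can be recomputed on the control plane with the standard pipeline from the kubeadm docs (path adjusted for minikube's certificatesDir):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
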
	I1002 20:19:33.346131  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:33.346146  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:33.349554  994709 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:19:33.352542  994709 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:19:33.358001  994709 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:19:33.358065  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:19:33.375272  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 20:19:33.656465  994709 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:19:33.656564  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:33.656619  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-693704 minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=addons-693704 minikube.k8s.io/primary=true
	I1002 20:19:33.838722  994709 ops.go:34] apiserver oom_adj: -16
	I1002 20:19:33.838894  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.339235  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.839327  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.339115  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.839347  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.339936  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.838951  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.339896  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.839301  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.981403  994709 kubeadm.go:1113] duration metric: took 4.324906426s to wait for elevateKubeSystemPrivileges
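
	Note: the half-second cadence of the kubectl get sa default runs above is minikube polling for the default ServiceAccount, its signal that the controller-manager's service-account machinery is live before kube-system privileges are elevated via the minikube-rbac binding created at 20:19:33; here the wait converged in ~4.3s. A rough equivalent of that loop:

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
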
	I1002 20:19:37.981430  994709 kubeadm.go:402] duration metric: took 20.541068078s to StartCluster
	I1002 20:19:37.981448  994709 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982146  994709 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:19:37.982540  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982732  994709 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:37.982850  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:19:37.983086  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:37.983116  994709 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
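
	Note: the toEnable map above is the complete addon matrix resolved for this profile; every key set to true produces a matching "Setting addon" line below, and volcano is the one that later fails with the crio-incompatibility warning. The same matrix can be inspected interactively (a sketch, assuming a minikube binary on PATH):

	    minikube -p addons-693704 addons list
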
	I1002 20:19:37.983227  994709 addons.go:69] Setting yakd=true in profile "addons-693704"
	I1002 20:19:37.983240  994709 addons.go:238] Setting addon yakd=true in "addons-693704"
	I1002 20:19:37.983262  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.983805  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.983948  994709 addons.go:69] Setting inspektor-gadget=true in profile "addons-693704"
	I1002 20:19:37.983963  994709 addons.go:238] Setting addon inspektor-gadget=true in "addons-693704"
	I1002 20:19:37.983984  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.984372  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.984784  994709 addons.go:69] Setting metrics-server=true in profile "addons-693704"
	I1002 20:19:37.984803  994709 addons.go:238] Setting addon metrics-server=true in "addons-693704"
	I1002 20:19:37.984846  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.985255  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986812  994709 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.987111  994709 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-693704"
	I1002 20:19:37.987164  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.986986  994709 addons.go:69] Setting cloud-spanner=true in profile "addons-693704"
	I1002 20:19:37.988662  994709 addons.go:238] Setting addon cloud-spanner=true in "addons-693704"
	I1002 20:19:37.988715  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.989206  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986995  994709 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-693704"
	I1002 20:19:37.992261  994709 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:37.992347  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993008  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.993440  994709 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.993470  994709 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-693704"
	I1002 20:19:37.993496  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993939  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986999  994709 addons.go:69] Setting default-storageclass=true in profile "addons-693704"
	I1002 20:19:37.999991  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-693704"
	I1002 20:19:37.987003  994709 addons.go:69] Setting gcp-auth=true in profile "addons-693704"
	I1002 20:19:38.001780  994709 mustload.go:65] Loading cluster: addons-693704
	I1002 20:19:38.002068  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:38.002442  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.004847  994709 addons.go:69] Setting registry=true in profile "addons-693704"
	I1002 20:19:38.004895  994709 addons.go:238] Setting addon registry=true in "addons-693704"
	I1002 20:19:38.004938  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.006258  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987015  994709 addons.go:69] Setting ingress=true in profile "addons-693704"
	I1002 20:19:38.027270  994709 addons.go:238] Setting addon ingress=true in "addons-693704"
	I1002 20:19:38.027361  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.027894  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987020  994709 addons.go:69] Setting ingress-dns=true in profile "addons-693704"
	I1002 20:19:38.058307  994709 addons.go:238] Setting addon ingress-dns=true in "addons-693704"
	I1002 20:19:38.058379  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.058921  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.096850  994709 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:19:38.105676  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:19:38.105709  994709 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:19:38.105842  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.008072  994709 out.go:179] * Verifying Kubernetes components...
	I1002 20:19:38.008152  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026483  994709 addons.go:69] Setting registry-creds=true in profile "addons-693704"
	I1002 20:19:38.116211  994709 addons.go:238] Setting addon registry-creds=true in "addons-693704"
	I1002 20:19:38.116261  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.116877  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.148060  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:38.026500  994709 addons.go:69] Setting storage-provisioner=true in profile "addons-693704"
	I1002 20:19:38.148217  994709 addons.go:238] Setting addon storage-provisioner=true in "addons-693704"
	I1002 20:19:38.148254  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.148800  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026507  994709 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-693704"
	I1002 20:19:38.181689  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-693704"
	I1002 20:19:38.026527  994709 addons.go:69] Setting volcano=true in profile "addons-693704"
	I1002 20:19:38.185000  994709 addons.go:238] Setting addon volcano=true in "addons-693704"
	I1002 20:19:38.185048  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.200337  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026533  994709 addons.go:69] Setting volumesnapshots=true in profile "addons-693704"
	I1002 20:19:38.221856  994709 addons.go:238] Setting addon volumesnapshots=true in "addons-693704"
	I1002 20:19:38.221908  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.222576  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.234975  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.241128  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:19:38.241462  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.027224  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.264137  994709 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:19:38.269034  994709 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:38.269076  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:19:38.269173  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.294256  994709 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:19:38.298092  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:19:38.298232  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:19:38.298258  994709 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:19:38.298339  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.305328  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:19:38.326652  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.333498  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.339026  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:19:38.339916  994709 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:19:38.340074  994709 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:19:38.348717  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:19:38.349240  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:38.349263  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:19:38.349335  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.370496  994709 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:19:38.370522  994709 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:19:38.370590  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393413  994709 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:38.393443  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:19:38.393518  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393705  994709 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:19:38.401523  994709 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:38.401566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:19:38.401656  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.415528  994709 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:19:38.419444  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:19:38.424637  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:19:38.430455  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:19:38.433425  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:19:38.434098  994709 out.go:179]   - Using image docker.io/registry:3.0.0
	W1002 20:19:38.437996  994709 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 20:19:38.442715  994709 addons.go:238] Setting addon default-storageclass=true in "addons-693704"
	I1002 20:19:38.442755  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.443165  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.443728  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.447652  994709 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:19:38.447679  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:19:38.447744  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.463660  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.464460  994709 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:19:38.466815  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:19:38.467693  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:38.467719  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:19:38.467819  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.470864  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:19:38.470890  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:19:38.470960  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.500926  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.502016  994709 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:19:38.503153  994709 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:19:38.510195  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:19:38.510222  994709 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:19:38.510304  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.511213  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
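
	Note: the sed pipeline above patches the CoreDNS Corefile in place: it injects a hosts block that resolves host.minikube.internal to the gateway address 192.168.49.1 and enables the log plugin, then feeds the result to kubectl replace. To inspect the patched Corefile afterwards:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
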
	I1002 20:19:38.512545  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.514344  994709 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-693704"
	I1002 20:19:38.514385  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.514794  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.538485  994709 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:38.538505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:19:38.538577  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.563237  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:38.563266  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:19:38.563330  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.573905  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.605278  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.621692  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.637902  994709 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:38.637933  994709 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:19:38.638002  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.655698  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.682118  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.689646  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.707346  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.731079  994709 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:19:38.738329  994709 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:19:38.738517  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.739582  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.741646  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.741686  994709 retry.go:31] will retry after 354.664397ms: ssh: handshake failed: EOF
	I1002 20:19:38.741822  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:38.741834  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:19:38.741914  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.754174  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.790638  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.791850  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.791874  994709 retry.go:31] will retry after 168.291026ms: ssh: handshake failed: EOF
	I1002 20:19:38.891518  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:19:38.961324  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.961355  994709 retry.go:31] will retry after 311.734351ms: ssh: handshake failed: EOF
	I1002 20:19:39.180793  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:19:39.180831  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:19:39.246769  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:19:39.246793  994709 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:19:39.317148  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:19:39.317174  994709 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:19:39.327274  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:39.369305  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:39.371258  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:39.386300  994709 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:19:39.386327  994709 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:19:39.412476  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:19:39.412502  994709 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:19:39.447295  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:39.454691  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:19:39.454712  994709 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:19:39.483532  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:39.489546  994709 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.489572  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:19:39.600950  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:19:39.600977  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:19:39.608088  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.608113  994709 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:19:39.625123  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:19:39.625149  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:19:39.646231  994709 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.646256  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:19:39.666494  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.667190  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.667209  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:19:39.670888  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:39.686238  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:39.763670  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:39.778706  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:19:39.778734  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:19:39.800126  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.803147  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.824074  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:19:39.824103  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:19:39.826926  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.887787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:39.970247  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:19:39.970276  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:19:39.982837  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:19:39.982863  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:19:40.095977  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:19:40.096005  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:19:40.202267  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:19:40.202301  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:19:40.252464  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:19:40.252492  994709 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:19:40.425953  994709 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.425979  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:19:40.440769  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:19:40.440793  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.759801869s)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.140115117s)
	I1002 20:19:40.651466  994709 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 20:19:40.652113  994709 node_ready.go:35] waiting up to 6m0s for node "addons-693704" to be "Ready" ...
	I1002 20:19:40.708925  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.740283  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:19:40.740311  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:19:41.000182  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:19:41.000218  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:19:41.157742  994709 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-693704" context rescaled to 1 replicas
	I1002 20:19:41.160542  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:19:41.160566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:19:41.368904  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:19:41.368930  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:19:41.434210  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.106899571s)
	I1002 20:19:41.434277  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.064948233s)
	I1002 20:19:41.546392  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 20:19:42.681278  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:44.305558  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.934264933s)
	I1002 20:19:44.305591  994709 addons.go:479] Verifying addon ingress=true in "addons-693704"
	I1002 20:19:44.305742  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.858421462s)
	I1002 20:19:44.305803  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.822248913s)
	I1002 20:19:44.306140  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.639611107s)
	W1002 20:19:44.306168  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:44.306190  994709 retry.go:31] will retry after 271.617135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:44.306249  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.635338018s)
	I1002 20:19:44.306301  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.62003767s)
	I1002 20:19:44.306341  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.542651372s)
	I1002 20:19:44.306505  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.506354272s)
	I1002 20:19:44.306533  994709 addons.go:479] Verifying addon registry=true in "addons-693704"
	I1002 20:19:44.306707  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.503533192s)
	I1002 20:19:44.306720  994709 addons.go:479] Verifying addon metrics-server=true in "addons-693704"
	I1002 20:19:44.306759  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.479800741s)
	I1002 20:19:44.307143  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.41932494s)
	I1002 20:19:44.307220  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.598267016s)
	W1002 20:19:44.307774  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:19:44.307787  994709 retry.go:31] will retry after 292.505551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I1002 20:19:44.308765  994709 out.go:179] * Verifying ingress addon...
	I1002 20:19:44.312945  994709 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-693704 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:19:44.313054  994709 out.go:179] * Verifying registry addon...
	I1002 20:19:44.315485  994709 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:19:44.317462  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:19:44.330428  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:19:44.330450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.330653  994709 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:19:44.330663  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 20:19:44.357589  994709 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 20:19:44.577967  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:44.601481  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:44.645691  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.099252349s)
	I1002 20:19:44.645728  994709 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:44.650504  994709 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:19:44.655039  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:19:44.667816  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:19:44.667846  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:44.821715  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.822383  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 20:19:45.161026  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:45.165268  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.325696  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.325851  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.657820  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.818501  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.820022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.829170  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.25115874s)
	W1002 20:19:45.829204  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:45.829226  994709 retry.go:31] will retry after 265.136863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:45.829298  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.227785836s)
	I1002 20:19:45.919439  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:19:45.919542  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:45.937711  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.064145  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:19:46.077198  994709 addons.go:238] Setting addon gcp-auth=true in "addons-693704"
	I1002 20:19:46.077246  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:46.077691  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:46.095085  994709 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:19:46.095135  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:46.095095  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:46.123058  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.164369  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.319756  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.321805  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:46.659517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.818237  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.819904  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:46.919069  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:46.919103  994709 retry.go:31] will retry after 624.133237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:46.922816  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:46.925777  994709 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:19:46.928684  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:19:46.928707  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:19:46.942491  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:19:46.942514  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:19:46.955438  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:46.955505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:19:46.968124  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:47.157960  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.322368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.322695  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.436497  994709 addons.go:479] Verifying addon gcp-auth=true in "addons-693704"
	I1002 20:19:47.440771  994709 out.go:179] * Verifying gcp-auth addon...
	I1002 20:19:47.444303  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:19:47.456952  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:19:47.457022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:47.544036  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:19:47.655544  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:47.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.819482  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.821740  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.947877  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.158799  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.321611  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.322176  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:48.351318  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:48.351351  994709 retry.go:31] will retry after 722.588456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:48.447412  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.658545  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.819500  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.821008  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.947811  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:49.074176  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:49.159044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.319369  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.321354  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:49.447565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:49.655967  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:49.657396  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.821534  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.821767  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:49.880261  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.880299  994709 retry.go:31] will retry after 823.045422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:49.948030  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.158812  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.318859  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.321025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.448207  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.657430  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.703742  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:50.819118  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.821057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.948368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:51.157785  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.320463  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.321544  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.448039  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:51.519077  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:51.519109  994709 retry.go:31] will retry after 1.329942428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:51.658147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.820515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.820951  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.947804  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:52.155980  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:52.158167  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.319637  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.321091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.448243  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:52.657697  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.819249  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.821572  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.849787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:52.949420  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:53.160825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.319057  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.321137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.448348  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:53.651601  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:53.651634  994709 retry.go:31] will retry after 4.065518596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 20:19:53.657468  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.820524  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.821033  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.948075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:54.157447  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:54.158479  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.318431  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.320091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.447825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:54.657905  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.819025  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.820709  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.947593  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.158249  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.320256  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.320691  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.447448  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.658171  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.820678  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.821069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.948074  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:56.157411  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.319659  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.320449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.447640  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:56.655854  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:56.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.818780  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.820792  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.947591  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.157766  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.318816  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.320927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.447823  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.657501  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.717603  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:57.820669  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.822065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.948192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:58.157875  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.319486  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.321536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.447507  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:58.508047  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:58.508078  994709 retry.go:31] will retry after 6.392155287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
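
The retry block above fails in kubectl's client-side validation: ig-crd.yaml is evidently missing the top-level apiVersion and kind fields that every Kubernetes manifest needs, so the apply aborts even though the other resources go through unchanged (see the stdout above). A minimal sketch of that kind of check, assuming gopkg.in/yaml.v3 and a single-document manifest; kubectl's real validator is far more involved:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// requireTypeMeta reports the same two problems kubectl flags above:
// a manifest whose top level is missing apiVersion or kind.
func requireTypeMeta(manifest []byte) error {
	var doc map[string]interface{}
	if err := yaml.Unmarshal(manifest, &doc); err != nil {
		return fmt.Errorf("not valid YAML: %w", err)
	}
	var missing []string
	if s, _ := doc["apiVersion"].(string); s == "" {
		missing = append(missing, "apiVersion not set")
	}
	if s, _ := doc["kind"].(string); s == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: [%s]", strings.Join(missing, ", "))
	}
	return nil
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	if err := requireTypeMeta(data); err != nil {
		fmt.Println(err) // e.g. error validating data: [apiVersion not set, kind not set]
	}
}
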
	I1002 20:19:58.657525  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.818599  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.820265  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.947800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:59.155950  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:59.158057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.321502  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.447568  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:59.657515  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.818527  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.820423  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.947158  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.191965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.322779  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.323712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.462450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.662487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.820978  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.821119  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.947103  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:01.165936  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.319105  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.321152  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.448705  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:01.656452  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:01.660465  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.820149  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.822237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.949425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.159485  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.320094  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.320855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.447847  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.658087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.822950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.823232  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.948025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.158590  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.318905  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.447723  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.821238  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.821662  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.947536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:04.157181  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:04.158586  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.319406  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.320569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.448026  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:04.657883  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.821087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.821316  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.900418  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:04.947850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.159494  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:05.319260  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.321183  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.448018  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.659872  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:05.704226  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.704266  994709 retry.go:31] will retry after 4.650395594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
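
Each failed apply is rescheduled with a randomized delay (6.39s, then 4.65s, then 8.89s in this run), the classic jittered-backoff retry loop. A minimal sketch of that shape, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter reruns fn until it succeeds or attempts run out,
// sleeping a randomized multiple of base between tries -- the shape
// of the "will retry after Ns" lines in the log above.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	_ = retryWithJitter(5, 5*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed")
		}
		return nil
	})
}
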
	I1002 20:20:05.819910  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.820237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.947300  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:06.157427  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:06.158681  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.319989  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.321509  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.447503  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:06.658321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.819075  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.820269  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.948556  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.158188  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.319456  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.320273  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.657768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.820523  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.821011  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.947761  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:08.157867  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.323022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.323328  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.447949  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:08.655164  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:08.657821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.820915  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.822270  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.947285  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.157631  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.319269  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.320630  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.447999  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.657541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.821314  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.821825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.947519  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:10.158695  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.320550  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.322127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.355287  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:10.448320  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:10.655677  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:10.658684  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.819582  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.820893  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.948135  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.160067  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:11.205481  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.205529  994709 retry.go:31] will retry after 8.886793783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.319286  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.320699  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.447959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.658932  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.818675  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.820427  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.947127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.157818  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.319903  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.320793  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.447987  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.819021  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.820692  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.947551  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:13.156319  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:13.159173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.319051  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.321143  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:13.657596  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.820773  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.820960  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.948072  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.158231  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.319445  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.320543  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.447788  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.658082  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.819689  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.821091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.948202  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:15.157836  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.319547  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.321065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.448065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:15.654975  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:15.658703  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.819187  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.823588  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.947274  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.158585  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.318872  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.321029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.448029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.658178  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.819331  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.819902  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.947835  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.158511  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.319014  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.320821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.447892  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.658439  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.818480  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.820595  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.947741  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:18.157451  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:18.159031  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.320870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.321273  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.448214  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:18.658565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.819116  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.821998  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.948071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.175178  994709 node_ready.go:49] node "addons-693704" is "Ready"
	I1002 20:20:19.175210  994709 node_ready.go:38] duration metric: took 38.523057861s for node "addons-693704" to be "Ready" ...
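
The node_ready.go lines above poll the node object until its Ready condition flips to True, logging "Ready":"False" (will retry) for the whole 38.5s. A minimal client-go sketch of that condition check, assuming standard package paths and the kubeconfig location from the log; minikube's own implementation differs in details:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition is True,
// mirroring the `node "addons-693704" has "Ready":"False"` retries above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "addons-693704", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(`node "addons-693704" is "Ready"`)
}
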
	I1002 20:20:19.175224  994709 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:20:19.175288  994709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:19.193541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.198169  994709 api_server.go:72] duration metric: took 41.215410635s to wait for apiserver process to appear ...
	I1002 20:20:19.198244  994709 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:20:19.198278  994709 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:20:19.210833  994709 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 20:20:19.213021  994709 api_server.go:141] control plane version: v1.34.1
	I1002 20:20:19.213118  994709 api_server.go:131] duration metric: took 14.852434ms to wait for apiserver health ...
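
The health gate above is just an HTTPS GET against the apiserver's /healthz endpoint, which answers 200 with the literal body "ok". A minimal sketch of such a poll; the InsecureSkipVerify here is an assumption for brevity only (minikube's real client trusts the cluster CA and presents client certs instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver healthz endpoint until it answers
// 200 "ok" or the deadline passes.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip certificate verification rather
		// than loading the cluster CA as the real health check does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
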
	I1002 20:20:19.213143  994709 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:20:19.259918  994709 system_pods.go:59] 18 kube-system pods found
	I1002 20:20:19.260007  994709 system_pods.go:61] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.260029  994709 system_pods.go:61] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.260046  994709 system_pods.go:61] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.260082  994709 system_pods.go:61] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.260110  994709 system_pods.go:61] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.260130  994709 system_pods.go:61] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.260165  994709 system_pods.go:61] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.260195  994709 system_pods.go:61] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 20:20:19.260219  994709 system_pods.go:61] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.260254  994709 system_pods.go:61] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.260278  994709 system_pods.go:61] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.260300  994709 system_pods.go:61] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.260337  994709 system_pods.go:61] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.260361  994709 system_pods.go:61] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.260379  994709 system_pods.go:61] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.260414  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.260436  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.260455  994709 system_pods.go:61] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.260473  994709 system_pods.go:74] duration metric: took 47.310617ms to wait for pod list to return data ...
	I1002 20:20:19.260513  994709 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:20:19.273557  994709 default_sa.go:45] found service account: "default"
	I1002 20:20:19.273635  994709 default_sa.go:55] duration metric: took 13.103031ms for default service account to be created ...
	I1002 20:20:19.273660  994709 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:20:19.293816  994709 system_pods.go:86] 18 kube-system pods found
	I1002 20:20:19.293898  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.293920  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.293938  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.293975  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.294002  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.294023  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.294068  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.294095  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.294114  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.294148  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.294173  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.294198  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.294246  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.294273  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.294296  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.294328  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.294351  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.294370  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.294416  994709 retry.go:31] will retry after 259.220758ms: missing components: kube-dns
	I1002 20:20:19.349532  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.350103  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:20:19.350175  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.523669  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.643831  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:19.643867  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.643879  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:19.643887  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:19.643893  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending
	I1002 20:20:19.643899  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.643904  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.643909  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.643918  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.643923  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.643931  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.643935  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.643940  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.643944  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.643948  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.643961  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.643965  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.643972  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:19.643980  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.643985  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.644006  994709 retry.go:31] will retry after 341.024008ms: missing components: kube-dns
	I1002 20:20:19.671892  994709 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:20:19.671917  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.827024  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.828000  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.961916  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.012275  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.012323  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.012334  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.012342  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.012350  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.012356  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.012362  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.012372  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.012377  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.012388  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.012400  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.012405  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.012412  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.012423  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.012429  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.012437  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.012448  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.012455  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012463  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012473  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:20:20.012491  994709 retry.go:31] will retry after 476.605934ms: missing components: kube-dns
	I1002 20:20:20.092973  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:20.160870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.323333  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.326140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.449179  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.500973  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.501060  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.501104  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.501129  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.501166  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.501192  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.501214  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.501249  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.501273  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.501296  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.501332  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.501358  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.501381  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.501417  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.501444  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.501467  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.501502  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.501531  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501554  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501589  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.501625  994709 retry.go:31] will retry after 439.708141ms: missing components: kube-dns
	I1002 20:20:20.672849  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.819664  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.823622  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.948959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.951441  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.951521  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.951545  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.951570  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.951663  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.951686  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.951728  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.951751  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.951769  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.951805  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.951826  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.951847  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.951883  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.951908  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.951932  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.951970  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.951997  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.952021  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952055  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952078  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.952108  994709 retry.go:31] will retry after 739.124115ms: missing components: kube-dns
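
Note: the "will retry after ...: missing components: kube-dns" line above is the system-pods readiness gate: minikube lists the kube-system pods, compares them against a required component set, and sleeps before re-checking. A minimal sketch of that wait-and-retry shape follows (hypothetical helper names, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// checkComponents stands in for the system_pods check in the log: it
	// reports which required kube-system components are not yet Running.
	// Stubbed here; minikube builds the message from a real pod listing.
	func checkComponents() error {
		return errors.New("missing components: kube-dns")
	}

	// waitForComponents retries checkComponents on a fixed interval until it
	// succeeds or the deadline passes, mirroring the retry.go lines above.
	func waitForComponents(timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			err := checkComponents()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %s: %v\n", interval, err)
			time.Sleep(interval)
		}
	}

	func main() {
		_ = waitForComponents(5*time.Second, 750*time.Millisecond)
	}
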
	I1002 20:20:21.175706  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.321496  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.322173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.447868  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:21.558307  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.465295653s)
	W1002 20:20:21.558346  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:21.558363  994709 retry.go:31] will retry after 14.276526589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
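
The failure above is kubectl's client-side schema validation: the first document in ig-crd.yaml is missing its apiVersion and kind fields, so every apply of the inspektor-gadget manifests exits with status 1 and minikube keeps retrying. kubectl's own message offers --validate=false as an escape hatch; the real fix is restoring the two fields in the manifest. A minimal sketch of the check that is tripping, assuming the gopkg.in/yaml.v3 package (an illustration of the rule, not kubectl's validator):

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency for this sketch
	)

	// typeMeta holds the two fields kubectl's validation reported as unset.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: check <manifest.yaml>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Note: yaml.Unmarshal decodes only the first YAML document; kubectl
		// validates every document in a multi-doc manifest such as ig-crd.yaml.
		var tm typeMeta
		if err := yaml.Unmarshal(data, &tm); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Println("error validating data: [apiVersion not set, kind not set]")
			os.Exit(1)
		}
		fmt.Println("apiVersion and kind are both set")
	}
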
	I1002 20:20:21.659390  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.696852  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:21.696889  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Running
	I1002 20:20:21.696903  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:21.696912  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:21.696919  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:21.696928  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:21.696933  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:21.696952  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:21.696957  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:21.696969  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:21.696973  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:21.696977  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:21.696984  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:21.696990  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:21.696997  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:21.697004  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:21.697010  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:21.697017  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697023  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697030  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:21.697039  994709 system_pods.go:126] duration metric: took 2.42335813s to wait for k8s-apps to be running ...
	I1002 20:20:21.697049  994709 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:20:21.697109  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:20:21.712608  994709 system_svc.go:56] duration metric: took 15.548645ms WaitForService to wait for kubelet
	I1002 20:20:21.712637  994709 kubeadm.go:586] duration metric: took 43.729883809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:20:21.712662  994709 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:20:21.716152  994709 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:20:21.716184  994709 node_conditions.go:123] node cpu capacity is 2
	I1002 20:20:21.716196  994709 node_conditions.go:105] duration metric: took 3.528491ms to run NodePressure ...
	I1002 20:20:21.716212  994709 start.go:242] waiting for startup goroutines ...
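
Between the polling lines, the system_svc.go entries above time a single probe: run systemctl is-active --quiet against the kubelet unit and record the elapsed time as the "duration metric" line. A simplified sketch of that probe (the logged command also wraps it in sudo; this assumes a systemd host and is not minikube's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is
	// active, so the error value doubles as the readiness signal.
	func main() {
		start := time.Now()
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Printf("kubelet active: %v (checked in %s)\n", err == nil, time.Since(start))
	}
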
	I1002 20:20:21.822012  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.823203  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.948612  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:22.159863  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:22.319122  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:22.321160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:22.448407  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:22.659710  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:22.819576  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:22.822386  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:22.948013  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:23.158517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:23.320332  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:23.321199  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:23.448043  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:23.658814  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:23.819698  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:23.821542  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:23.947452  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:24.159652  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:24.320759  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:24.321094  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:24.448153  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:24.659358  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:24.818645  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:24.821517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:24.947484  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:25.159952  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:25.321433  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:25.321885  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:25.447985  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:25.658784  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:25.819014  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:25.821666  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:25.948082  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:26.158745  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:26.320197  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:26.321222  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:26.447719  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:26.659182  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:26.820428  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:26.822051  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:26.948367  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:27.160977  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:27.320573  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:27.321652  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:27.447890  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:27.658939  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:27.818985  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:27.821059  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:27.948366  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:28.161780  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:28.320321  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:28.321410  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:28.447506  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:28.658747  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:28.818976  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:28.821650  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:28.947845  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:29.159622  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:29.319270  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:29.321801  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:29.448168  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:29.658794  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:29.819079  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:29.821429  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:29.947641  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:30.159369  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:30.321561  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:30.321972  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:30.450696  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:30.659510  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:30.819828  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:30.821734  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:30.948076  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:31.159094  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.321697  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.322081  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.448086  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:31.658821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.818887  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.821458  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.947963  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:32.159614  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:32.320675  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:32.322256  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:32.447303  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:32.659923  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:32.820647  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:32.822321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:32.947394  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:33.159274  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:33.321237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:33.321628  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:33.448072  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:33.658574  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:33.818908  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:33.821510  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:33.947671  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:34.159537  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:34.319486  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:34.320732  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:34.447992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:34.659409  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:34.818851  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:34.821162  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:34.948557  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:35.160400  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:35.319095  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:35.321775  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:35.448790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:35.659951  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:35.821520  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:35.823605  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:35.835876  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:35.949194  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.163303  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.369184  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.369321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.659548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.819011  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.821548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.947353  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.013829  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177912886s)
	W1002 20:20:37.013873  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:37.013894  994709 retry.go:31] will retry after 16.584617559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:37.159246  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:37.320047  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:37.320218  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:37.447625  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.659969  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:37.819508  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:37.822005  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:37.948056  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:38.159157  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:38.319619  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:38.321829  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:38.448325  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:38.659094  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:38.819553  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:38.822084  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:38.948224  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:39.158955  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:39.320358  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:39.321896  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:39.449482  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:39.658678  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:39.819596  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:39.822618  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:39.948042  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:40.159165  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:40.321897  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:40.322102  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:40.448692  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:40.659424  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:40.820442  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:40.822438  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:40.953063  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:41.160230  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:41.324908  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:41.325018  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:41.448365  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:41.659981  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:41.819204  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:41.825800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:41.948326  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:42.160221  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:42.323678  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:42.323892  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:42.448685  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:42.658968  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:42.820548  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:42.825595  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:42.948014  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:43.164487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:43.325308  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:43.325546  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:43.447728  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:43.659083  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:43.819978  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:43.821102  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:43.948368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:44.159319  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:44.322007  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:44.323000  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:44.448438  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:44.658701  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:44.818251  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:44.822093  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:44.948073  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:45.161234  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:45.337364  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:45.337615  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:45.448555  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:45.659203  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:45.820630  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:45.822020  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:45.948309  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:46.158793  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:46.322305  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:46.323889  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:46.449028  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:46.658214  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:46.821838  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:46.822319  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:46.948024  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:47.168388  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:47.319302  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:47.321739  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:47.447694  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:47.659702  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:47.818326  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:47.821106  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:47.948063  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:48.159478  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:48.321404  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:48.321977  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:48.448403  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:48.658631  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:48.818698  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:48.820834  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:48.947578  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:49.159437  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:49.321139  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:49.321707  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:49.447554  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:49.659009  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:49.819029  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:49.821580  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:49.947616  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:50.160129  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:50.320228  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.321534  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:50.451851  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:50.660002  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:50.820960  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.822905  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:50.947934  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:51.161193  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:51.320670  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:51.320931  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:51.447529  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:51.672034  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:51.823387  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:51.823949  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:51.949349  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:52.159584  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:52.321246  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:52.323112  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:52.450831  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:52.661759  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:52.819601  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:52.822812  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:52.948147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:53.158260  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.320954  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.321416  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:53.447684  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:53.598745  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:53.658921  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.822095  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.822140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:53.948027  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.159720  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.319139  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.323475  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:54.449052  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.659950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.800080  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201286682s)
	W1002 20:20:54.800158  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:54.800190  994709 retry.go:31] will retry after 36.238432013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
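
Across the three failed applies the retry delays grow (14.28s, 16.58s, 36.24s), which is the shape of a jittered exponential backoff. A sketch of that schedule follows; the exact curve is minikube-internal, and this only illustrates doubling plus random jitter:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoff doubles the base delay each attempt and adds random jitter so
	// that concurrent retries do not synchronize. Assumed shape, not
	// minikube's actual retry schedule.
	func backoff(attempt int, base time.Duration) time.Duration {
		d := base << attempt // exponential growth: base, 2*base, 4*base, ...
		return d + time.Duration(rand.Int63n(int64(d)))
	}

	func main() {
		for i := 0; i < 3; i++ {
			fmt.Println(backoff(i, 10*time.Second))
		}
	}
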
	I1002 20:20:54.821361  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:54.822118  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.948234  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:55.160177  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:55.319580  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:55.323520  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:55.447562  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:55.659028  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:55.820055  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:55.822888  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:55.948043  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:56.160147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:56.320399  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:56.322153  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:56.448568  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:56.662690  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:56.822552  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:56.822724  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:56.948654  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:57.165959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:57.323611  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:57.324125  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:57.448839  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:57.659243  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:57.827311  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:57.827796  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:57.951325  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:58.160073  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:58.325194  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:58.325637  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:58.449778  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:58.663289  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:58.823656  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:58.824142  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:58.951729  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:59.159992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:59.320856  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:59.322241  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:59.451389  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:59.659448  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:59.824351  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:59.824752  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:59.948244  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:00.178734  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:00.334811  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:00.335334  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:00.449977  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:00.660186  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:00.819874  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:00.820185  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:00.948376  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:01.159525  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:01.325608  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:01.326800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:01.448685  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:01.660941  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:01.819636  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:01.822396  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:01.947837  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:02.160841  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:02.319889  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:02.323200  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:02.447592  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:02.663926  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:02.819507  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:02.822454  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:02.948180  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:03.158836  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:03.320854  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.322443  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:03.447975  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:03.658196  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:03.823965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:03.824515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.947809  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.160130  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.319792  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.320970  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:04.458399  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.659641  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.819337  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.821346  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:04.948487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.159402  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.318537  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.320782  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:05.447768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.659047  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.820074  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.821224  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:05.948044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.158918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.319264  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.321170  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:06.448425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.661071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.819015  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.821112  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:06.948418  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.159287  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:07.320880  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.322732  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:07.448299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.659089  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:07.833876  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:07.834240  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.948415  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.158976  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.320300  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.320874  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:08.448633  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.659076  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.820477  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.820621  994709 kapi.go:107] duration metric: took 1m24.50316116s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 20:21:08.948034  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.158956  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.319324  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.447625  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.660083  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.826440  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.949323  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.163992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.320103  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.449195  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.658029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.843087  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.948535  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.159397  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.319712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.447769  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.659756  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.819109  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.947822  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.159549  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.319206  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.446918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.658927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.824411  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.947802  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.159449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.318706  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.454138  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.658608  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.819013  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.948036  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.159253  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.319616  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.449075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.662100  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.824454  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.950365  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.161131  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.319196  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.447530  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.663409  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.820874  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.953095  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.165487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.319583  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.448606  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.659953  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.819503  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.975219  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.158372  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.318879  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.448192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.658937  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.820351  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.947275  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.158790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.319421  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.659923  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.822375  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.947862  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.159020  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.319073  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.447850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.659710  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.818515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.947671  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.160392  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.318657  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.448137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.660115  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.819099  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.951129  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.160373  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.325467  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:21.449746  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.659955  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.819131  994709 kapi.go:107] duration metric: took 1m37.503635731s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:21:21.948370  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.158762  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.447738  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.658570  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.949101  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.158220  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.451919  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.658790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.948375  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.159201  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.449117  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.659750  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.948295  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.160000  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.448116  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.658136  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.948058  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.158569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.447775  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.658964  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.948377  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.159144  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.448069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.658935  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.955751  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.159540  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.448912  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.662299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.947885  994709 kapi.go:107] duration metric: took 1m41.503580566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:21:28.951140  994709 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-693704 cluster.
	I1002 20:21:28.954142  994709 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:21:28.956995  994709 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
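
A minimal sketch of the opt-out the message above describes, for a pod that should not receive the mounted credentials. The `gcp-auth-skip-secret` label key comes from the log; the "true" value, pod name, and image are illustrative assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"  # key taken from the gcp-auth addon message above
    spec:
      containers:
      - name: app
        image: busybox:stable
        command: ["sleep", "3600"]
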
	I1002 20:21:29.159855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:29.664073  994709 kapi.go:107] duration metric: took 1m45.009034533s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
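
The roughly 500ms cadence of the kapi.go lines above is a label-selector poll against the API server. A minimal client-go sketch of that pattern (this is not minikube's actual kapi.go; the function name, kubeconfig path, and timeout are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls pods matching selector in ns until all are Running,
    // mirroring the "waiting for pod ... current state: Pending" lines above.
    func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        running = false
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
        }
        return fmt.Errorf("timed out waiting for pods matching %s", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 4*time.Minute); err != nil {
            panic(err)
        }
    }
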
	I1002 20:21:31.039676  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:21:31.852592  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:21:31.852690  994709 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
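
The failure above is kubectl client-side validation: at least one document in ig-crd.yaml reached the apply step without its apiVersion and kind fields, so the whole file was rejected. Every manifest document needs a header of the following shape (the group/version and CRD name below are illustrative, not the actual contents of ig-crd.yaml):

    apiVersion: apiextensions.k8s.io/v1    # missing -> "apiVersion not set"
    kind: CustomResourceDefinition         # missing -> "kind not set"
    metadata:
      name: traces.gadget.example.io       # hypothetical name
    spec:
      # ... CRD schema ...
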
	I1002 20:21:31.856656  994709 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 20:21:31.859688  994709 addons.go:514] duration metric: took 1m53.876564642s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner ingress-dns metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 20:21:31.859739  994709 start.go:247] waiting for cluster config update ...
	I1002 20:21:31.859761  994709 start.go:256] writing updated cluster config ...
	I1002 20:21:31.860060  994709 ssh_runner.go:195] Run: rm -f paused
	I1002 20:21:31.863547  994709 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:31.867571  994709 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.872068  994709 pod_ready.go:94] pod "coredns-66bc5c9577-4kbq4" is "Ready"
	I1002 20:21:31.872092  994709 pod_ready.go:86] duration metric: took 4.493776ms for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.874237  994709 pod_ready.go:83] waiting for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.878256  994709 pod_ready.go:94] pod "etcd-addons-693704" is "Ready"
	I1002 20:21:31.878280  994709 pod_ready.go:86] duration metric: took 4.022961ms for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.880276  994709 pod_ready.go:83] waiting for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.885189  994709 pod_ready.go:94] pod "kube-apiserver-addons-693704" is "Ready"
	I1002 20:21:31.885218  994709 pod_ready.go:86] duration metric: took 4.915919ms for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.887484  994709 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.267515  994709 pod_ready.go:94] pod "kube-controller-manager-addons-693704" is "Ready"
	I1002 20:21:32.267553  994709 pod_ready.go:86] duration metric: took 380.043461ms for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.468152  994709 pod_ready.go:83] waiting for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.869233  994709 pod_ready.go:94] pod "kube-proxy-gdxqs" is "Ready"
	I1002 20:21:32.869266  994709 pod_ready.go:86] duration metric: took 401.082172ms for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.067662  994709 pod_ready.go:83] waiting for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469284  994709 pod_ready.go:94] pod "kube-scheduler-addons-693704" is "Ready"
	I1002 20:21:33.469361  994709 pod_ready.go:86] duration metric: took 401.671243ms for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469380  994709 pod_ready.go:40] duration metric: took 1.605801066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:33.530905  994709 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:21:33.534526  994709 out.go:179] * Done! kubectl is now configured to use "addons-693704" cluster and "default" namespace by default
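
The pod_ready checks above can be reproduced by hand with the same label selectors, assuming kubectl is pointed at the addons-693704 cluster:

    kubectl get pods -n kube-system -l k8s-app=kube-dns
    kubectl wait --for=condition=Ready pod -n kube-system -l component=etcd --timeout=4m
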
	
	
	==> CRI-O <==
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596707116Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596757445Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.747966118Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.748159705Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.748208131Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:36 addons-693704 crio[828]: time="2025-10-02T20:26:36.373758435Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:27:06 addons-693704 crio[828]: time="2025-10-02T20:27:06.643610753Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=80ac12bc-c4b9-49ab-9f30-9bfc5d720786 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:27:06 addons-693704 crio[828]: time="2025-10-02T20:27:06.646541053Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:27:29 addons-693704 crio[828]: time="2025-10-02T20:27:29.994974818Z" level=info msg="Running pod sandbox: default/nginx/POD" id=82be2d0d-81aa-4e18-b886-6bbcd86088c0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 20:27:29 addons-693704 crio[828]: time="2025-10-02T20:27:29.995047777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.010859002Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:159ab64d0ea8ec1d2e637b0ae8a8b6efdef4f0ddb59ea8f3daa1ae5e6125f37e UID:56f5bc51-854e-47f6-a9a2-ee03227a1b18 NetNS:/var/run/netns/f1c508a5-3250-45cc-aafa-dc9f32ecf1e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b22430}] Aliases:map[]}"
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.010907305Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.047042942Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:159ab64d0ea8ec1d2e637b0ae8a8b6efdef4f0ddb59ea8f3daa1ae5e6125f37e UID:56f5bc51-854e-47f6-a9a2-ee03227a1b18 NetNS:/var/run/netns/f1c508a5-3250-45cc-aafa-dc9f32ecf1e5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b22430}] Aliases:map[]}"
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.047202389Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.051752128Z" level=info msg="Ran pod sandbox 159ab64d0ea8ec1d2e637b0ae8a8b6efdef4f0ddb59ea8f3daa1ae5e6125f37e with infra container: default/nginx/POD" id=82be2d0d-81aa-4e18-b886-6bbcd86088c0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.055104925Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=39c5c5b3-3ce8-47a9-99f8-d62ac45121cb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.055301687Z" level=info msg="Image docker.io/nginx:alpine not found" id=39c5c5b3-3ce8-47a9-99f8-d62ac45121cb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:27:30 addons-693704 crio[828]: time="2025-10-02T20:27:30.055342753Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=39c5c5b3-3ce8-47a9-99f8-d62ac45121cb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:27:36 addons-693704 crio[828]: time="2025-10-02T20:27:36.959102866Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:27:39 addons-693704 crio[828]: time="2025-10-02T20:27:39.174266999Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=e06782ea-c0f2-476e-be34-a91726fe4dee name=/runtime.v1.ImageService/PullImage
	Oct 02 20:27:39 addons-693704 crio[828]: time="2025-10-02T20:27:39.175773547Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:27:39 addons-693704 crio[828]: time="2025-10-02T20:27:39.919741478Z" level=info msg="Stopping pod sandbox: 132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a" id=aa80b855-600c-48c7-a5bd-3799e90b8c68 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:27:39 addons-693704 crio[828]: time="2025-10-02T20:27:39.920044486Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 Namespace:local-path-storage ID:132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a UID:bf5e0fa0-b505-42e6-98e4-bbed23229c11 NetNS:/var/run/netns/f52e2632-63f0-4221-b5df-87894cfaabf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b22fe0}] Aliases:map[]}"
	Oct 02 20:27:39 addons-693704 crio[828]: time="2025-10-02T20:27:39.920195834Z" level=info msg="Deleting pod local-path-storage_helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 from CNI network \"kindnet\" (type=ptp)"
	Oct 02 20:27:39 addons-693704 crio[828]: time="2025-10-02T20:27:39.95122264Z" level=info msg="Stopped pod sandbox: 132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a" id=aa80b855-600c-48c7-a5bd-3799e90b8c68 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	0bc9f0d1b235e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          6 minutes ago       Running             busybox                                  0                   a4b1fc9c97e53       busybox                                    default
	6928dd54cd320       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          6 minutes ago       Running             csi-snapshotter                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	40761b95b2196       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 6 minutes ago       Running             gcp-auth                                 0                   9c1545073abea       gcp-auth-78565c9fb4-27djq                  gcp-auth
	8860f0e019516       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          6 minutes ago       Running             csi-provisioner                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	36c49020464e2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            6 minutes ago       Running             liveness-probe                           0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b7161126faae3       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           6 minutes ago       Running             hostpath                                 0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b2b0003c8ca36       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             6 minutes ago       Running             controller                               0                   3a08c5d217c56       ingress-nginx-controller-9cc49f96f-9frwt   ingress-nginx
	2852575f20001       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            6 minutes ago       Running             gadget                                   0                   34878d06228a7       gadget-gljs2                               gadget
	ee97eb0b32c7f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                6 minutes ago       Running             node-driver-registrar                    0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	e42d2c0b7778e       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             6 minutes ago       Running             local-path-provisioner                   0                   b4f667a1ce299       local-path-provisioner-648f6765c9-v6khh    local-path-storage
	fc0714b2fd72f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              6 minutes ago       Running             registry-proxy                           0                   c8535afb414d5       registry-proxy-2kw45                       kube-system
	bca1297af7427       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   6 minutes ago       Exited              patch                                    0                   e925887ddf0d9       ingress-nginx-admission-patch-v6xpn        ingress-nginx
	627ce890f2b48       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               6 minutes ago       Running             cloud-spanner-emulator                   0                   49dda3c4634a4       cloud-spanner-emulator-85f6b7fc65-5wsmw    default
	16f4af5cddb75       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           7 minutes ago       Running             registry                                 0                   4bae41325f3f5       registry-66898fdd98-8rftt                  kube-system
	91fa943497ee5       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        7 minutes ago       Running             metrics-server                           0                   27cb63141e106       metrics-server-85b7d694d7-8pl6l            kube-system
	439510daf689e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               7 minutes ago       Running             minikube-ingress-dns                     0                   e547aac4b280e       kube-ingress-dns-minikube                  kube-system
	063fa56393267       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              7 minutes ago       Running             csi-resizer                              0                   20ac69c0a7e28       csi-hostpath-resizer-0                     kube-system
	948a7498f368d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   7 minutes ago       Running             csi-external-health-monitor-controller   0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	bbd0c0fdbe948       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             7 minutes ago       Running             csi-attacher                             0                   e6f6a7809eb96       csi-hostpath-attacher-0                    kube-system
	697e9a6f92fb8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   7 minutes ago       Exited              create                                   0                   ec9abb5f653b7       ingress-nginx-admission-create-fndzf       ingress-nginx
	4a5b5d50e1426       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ae9275c193e86       nvidia-device-plugin-daemonset-jblz6       kube-system
	4757a91ace2d4       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      7 minutes ago       Running             volume-snapshot-controller               0                   7cb6188e8093e       snapshot-controller-7d9fbc56b8-49h86       kube-system
	88520ea2c4ca7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      7 minutes ago       Running             volume-snapshot-controller               0                   4de0d58fcc8d5       snapshot-controller-7d9fbc56b8-bw7rc       kube-system
	9390fd50f454e       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              7 minutes ago       Running             yakd                                     0                   a77b4648943e2       yakd-dashboard-5ff678cb9-b48gd             yakd-dashboard
	ec242b99be750       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             7 minutes ago       Running             coredns                                  0                   5e1993cbe5e41       coredns-66bc5c9577-4kbq4                   kube-system
	165a582582a89       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             7 minutes ago       Running             storage-provisioner                      0                   8b4b5f8349762       storage-provisioner                        kube-system
	cde8e7a8a028e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             8 minutes ago       Running             kindnet-cni                              0                   b1a33925c911a       kindnet-p9zvn                              kube-system
	0703880dcf265       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             8 minutes ago       Running             kube-proxy                               0                   18175bde14b29       kube-proxy-gdxqs                           kube-system
	972d6e9616c37       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             8 minutes ago       Running             etcd                                     0                   789f38c5890c2       etcd-addons-693704                         kube-system
	020148eb47c8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             8 minutes ago       Running             kube-scheduler                           0                   3aa090880fcae       kube-scheduler-addons-693704               kube-system
	ab99c3bb8f644       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             8 minutes ago       Running             kube-controller-manager                  0                   629d2cf069469       kube-controller-manager-addons-693704      kube-system
	71c9ea9528918       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             8 minutes ago       Running             kube-apiserver                           0                   de4f0abfefce3       kube-apiserver-addons-693704               kube-system
	
	
	==> coredns [ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b] <==
	[INFO] 10.244.0.17:55859 - 34053 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.006721575s
	[INFO] 10.244.0.17:55859 - 46822 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000305001s
	[INFO] 10.244.0.17:55859 - 21325 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000282717s
	[INFO] 10.244.0.17:37045 - 20421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162088s
	[INFO] 10.244.0.17:37045 - 20651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000128325s
	[INFO] 10.244.0.17:51048 - 61194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092519s
	[INFO] 10.244.0.17:51048 - 61672 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085027s
	[INFO] 10.244.0.17:57091 - 44872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088334s
	[INFO] 10.244.0.17:57091 - 44684 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105589s
	[INFO] 10.244.0.17:59527 - 40959 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003459669s
	[INFO] 10.244.0.17:59527 - 41156 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003770241s
	[INFO] 10.244.0.17:59136 - 21305 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142257s
	[INFO] 10.244.0.17:59136 - 21125 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093717s
	[INFO] 10.244.0.21:41484 - 12317 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192315s
	[INFO] 10.244.0.21:60775 - 50484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142913s
	[INFO] 10.244.0.21:49862 - 44888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127521s
	[INFO] 10.244.0.21:54840 - 52239 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149642s
	[INFO] 10.244.0.21:42560 - 6869 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156624s
	[INFO] 10.244.0.21:41861 - 43315 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000298545s
	[INFO] 10.244.0.21:38412 - 8398 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002294645s
	[INFO] 10.244.0.21:40087 - 34579 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002408201s
	[INFO] 10.244.0.21:50163 - 3512 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006805026s
	[INFO] 10.244.0.21:42501 - 46640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006618816s
	[INFO] 10.244.0.23:46061 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191659s
	[INFO] 10.244.0.23:58330 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122318s
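
The NXDOMAIN-then-NOERROR pattern above is resolv.conf search-path expansion: each short name is tried against every search domain until a query succeeds. The search list implied by the queries from 10.244.0.17 (a kube-system pod), written out as a resolv.conf sketch; the nameserver address and ndots value are assumptions based on typical minikube defaults:

    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10   # assumed cluster DNS service IP
    options ndots:5
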
	
	
	==> describe nodes <==
	Name:               addons-693704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-693704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=addons-693704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-693704
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-693704"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:19:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-693704
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:27:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:27:21 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:27:21 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:27:21 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:27:21 +0000   Thu, 02 Oct 2025 20:20:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-693704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 db645666b7ad4f1695da9df78e9fa367
	  System UUID:                021278b1-6d13-4d8b-91c7-a5de147567f7
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  default                     cloud-spanner-emulator-85f6b7fc65-5wsmw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-gljs2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  gcp-auth                    gcp-auth-78565c9fb4-27djq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9frwt    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         8m17s
	  kube-system                 coredns-66bc5c9577-4kbq4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 csi-hostpathplugin-kkptd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 etcd-addons-693704                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m30s
	  kube-system                 kindnet-p9zvn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m24s
	  kube-system                 kube-apiserver-addons-693704                250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-addons-693704       200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-gdxqs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-addons-693704                100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 metrics-server-85b7d694d7-8pl6l             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         8m18s
	  kube-system                 nvidia-device-plugin-daemonset-jblz6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 registry-66898fdd98-8rftt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 registry-creds-764b6fb674-6cg6b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 registry-proxy-2kw45                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-49h86        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-bw7rc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  local-path-storage          local-path-provisioner-648f6765c9-v6khh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b48gd              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m22s                  kube-proxy       
	  Normal   Starting                 8m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m35s (x8 over 8m35s)  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m35s (x8 over 8m35s)  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m35s (x8 over 8m35s)  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m29s                  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s                  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s                  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m25s                  node-controller  Node addons-693704 event: Registered Node addons-693704 in Controller
	  Normal   NodeReady                7m42s                  kubelet          Node addons-693704 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3] <==
	{"level":"warn","ts":"2025-10-02T20:19:28.781544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.806892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.814167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.836647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.852657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.878105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.886646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.904572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.925806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.935913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.956578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.971517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.993677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.031509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.041915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.068902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.157895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.092047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.118929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.895880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.909631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.000732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.017116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:20:36.364046Z","caller":"traceutil/trace.go:172","msg":"trace[1063042819] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"113.56953ms","start":"2025-10-02T20:20:36.250465Z","end":"2025-10-02T20:20:36.364035Z","steps":["trace[1063042819] 'process raft request'  (duration: 56.881349ms)","trace[1063042819] 'compare'  (duration: 56.419938ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T20:20:36.365279Z","caller":"traceutil/trace.go:172","msg":"trace[29069078] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"104.71736ms","start":"2025-10-02T20:20:36.259205Z","end":"2025-10-02T20:20:36.363922Z","steps":["trace[29069078] 'process raft request'  (duration: 104.653649ms)"],"step_count":1}
	
	
	==> gcp-auth [40761b95b219669fa13be3f37e9874311bcd42514e92101fcec6f883bf46c837] <==
	2025/10/02 20:21:27 GCP Auth Webhook started!
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:55 Ready to marshal response ...
	2025/10/02 20:21:55 Ready to write response ...
	2025/10/02 20:21:59 Ready to marshal response ...
	2025/10/02 20:21:59 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:24:21 Ready to marshal response ...
	2025/10/02 20:24:21 Ready to write response ...
	2025/10/02 20:26:52 Ready to marshal response ...
	2025/10/02 20:26:52 Ready to write response ...
	2025/10/02 20:27:29 Ready to marshal response ...
	2025/10/02 20:27:29 Ready to write response ...
	
	
	==> kernel <==
	 20:28:01 up  5:10,  0 user,  load average: 1.69, 1.61, 2.55
	Linux addons-693704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0] <==
	I1002 20:25:58.910161       1 main.go:301] handling current node
	I1002 20:26:08.907658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:08.907789       1 main.go:301] handling current node
	I1002 20:26:18.914211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:18.914253       1 main.go:301] handling current node
	I1002 20:26:28.911384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:28.911494       1 main.go:301] handling current node
	I1002 20:26:38.914130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:38.914165       1 main.go:301] handling current node
	I1002 20:26:48.907640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:48.907681       1 main.go:301] handling current node
	I1002 20:26:58.908673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:58.908721       1 main.go:301] handling current node
	I1002 20:27:08.914122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:08.914154       1 main.go:301] handling current node
	I1002 20:27:18.911342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:18.911383       1 main.go:301] handling current node
	I1002 20:27:28.913492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:28.913529       1 main.go:301] handling current node
	I1002 20:27:38.907626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:38.907662       1 main.go:301] handling current node
	I1002 20:27:48.907634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:48.907666       1 main.go:301] handling current node
	I1002 20:27:58.909301       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:58.909424       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba] <==
	E1002 20:21:08.431257       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:08.431339       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	E1002 20:21:08.433865       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	W1002 20:21:09.431415       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 20:21:09.431472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:09.431507       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431564       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 20:21:09.432661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:13.450452       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:13.450503       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:13.450794       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1002 20:21:13.499856       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 20:21:44.668705       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43290: use of closed network connection
	I1002 20:27:29.421426       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 20:27:29.744566       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.50.219"}
	
	
	==> kube-controller-manager [ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c] <==
	I1002 20:19:36.927821       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:19:36.927907       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-693704"
	I1002 20:19:36.927948       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 20:19:36.927971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:19:36.929043       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:19:36.929089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:19:36.929104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:19:36.929196       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:19:36.929242       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:19:36.930939       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:19:36.953633       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:19:36.957922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 20:19:42.958900       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 20:20:06.887630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:06.887888       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 20:20:06.887954       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 20:20:06.966287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 20:20:06.978573       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 20:20:06.989795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:20:07.080038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:20:21.939957       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 20:20:36.994429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:37.091221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1002 20:21:07.000284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:21:07.098427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1] <==
	I1002 20:19:38.989384       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:19:39.087738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:19:39.188580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:19:39.188619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:19:39.188702       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:19:39.263259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:19:39.267990       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:19:39.278942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:19:39.279269       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:19:39.279289       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:19:39.289355       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:19:39.289374       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:19:39.289655       1 config.go:200] "Starting service config controller"
	I1002 20:19:39.289662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:19:39.289995       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:19:39.290002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:19:39.290636       1 config.go:309] "Starting node config controller"
	I1002 20:19:39.290645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:19:39.290651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:19:39.390091       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:19:39.390138       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:19:39.390179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251] <==
	E1002 20:19:30.082976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:19:30.083025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:19:30.083075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:30.083123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:19:30.083172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:30.083221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:19:30.083269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:19:30.083318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:19:30.083367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:19:30.083415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:19:30.083460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.083513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.083555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:19:30.083651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:19:30.083692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:19:30.083739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:30.086243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 20:19:30.905348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:19:30.932288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.964617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.984039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:31.017892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:31.036527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:31.063255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 20:19:31.603691       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.642025    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0a97c0d4-0277-4225-81aa-39349ced9b52): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.642096    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:27:09 addons-693704 kubelet[1282]: I1002 20:27:09.746645    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jblz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:27:21 addons-693704 kubelet[1282]: E1002 20:27:21.747269    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:27:29 addons-693704 kubelet[1282]: I1002 20:27:29.763020    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkzj\" (UniqueName: \"kubernetes.io/projected/56f5bc51-854e-47f6-a9a2-ee03227a1b18-kube-api-access-rqkzj\") pod \"nginx\" (UID: \"56f5bc51-854e-47f6-a9a2-ee03227a1b18\") " pod="default/nginx"
	Oct 02 20:27:29 addons-693704 kubelet[1282]: I1002 20:27:29.763088    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/56f5bc51-854e-47f6-a9a2-ee03227a1b18-gcp-creds\") pod \"nginx\" (UID: \"56f5bc51-854e-47f6-a9a2-ee03227a1b18\") " pod="default/nginx"
	Oct 02 20:27:34 addons-693704 kubelet[1282]: E1002 20:27:34.747124    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:27:39 addons-693704 kubelet[1282]: E1002 20:27:39.173356    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:27:39 addons-693704 kubelet[1282]: E1002 20:27:39.173414    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:27:39 addons-693704 kubelet[1282]: E1002 20:27:39.173572    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84_local-path-storage(bf5e0fa0-b505-42e6-98e4-bbed23229c11): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" logger="UnhandledError"
	Oct 02 20:27:39 addons-693704 kubelet[1282]: E1002 20:27:39.173614    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" podUID="bf5e0fa0-b505-42e6-98e4-bbed23229c11"
	Oct 02 20:27:39 addons-693704 kubelet[1282]: I1002 20:27:39.747729    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-8rftt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.052931    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bf5e0fa0-b505-42e6-98e4-bbed23229c11-gcp-creds\") pod \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\" (UID: \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\") "
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.052997    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bf5e0fa0-b505-42e6-98e4-bbed23229c11-data\") pod \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\" (UID: \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\") "
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.053048    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4vl8\" (UniqueName: \"kubernetes.io/projected/bf5e0fa0-b505-42e6-98e4-bbed23229c11-kube-api-access-k4vl8\") pod \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\" (UID: \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\") "
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.053072    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bf5e0fa0-b505-42e6-98e4-bbed23229c11-script\") pod \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\" (UID: \"bf5e0fa0-b505-42e6-98e4-bbed23229c11\") "
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.053578    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf5e0fa0-b505-42e6-98e4-bbed23229c11-script" (OuterVolumeSpecName: "script") pod "bf5e0fa0-b505-42e6-98e4-bbed23229c11" (UID: "bf5e0fa0-b505-42e6-98e4-bbed23229c11"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.053631    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf5e0fa0-b505-42e6-98e4-bbed23229c11-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bf5e0fa0-b505-42e6-98e4-bbed23229c11" (UID: "bf5e0fa0-b505-42e6-98e4-bbed23229c11"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.053665    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf5e0fa0-b505-42e6-98e4-bbed23229c11-data" (OuterVolumeSpecName: "data") pod "bf5e0fa0-b505-42e6-98e4-bbed23229c11" (UID: "bf5e0fa0-b505-42e6-98e4-bbed23229c11"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.059946    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5e0fa0-b505-42e6-98e4-bbed23229c11-kube-api-access-k4vl8" (OuterVolumeSpecName: "kube-api-access-k4vl8") pod "bf5e0fa0-b505-42e6-98e4-bbed23229c11" (UID: "bf5e0fa0-b505-42e6-98e4-bbed23229c11"). InnerVolumeSpecName "kube-api-access-k4vl8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.153624    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bf5e0fa0-b505-42e6-98e4-bbed23229c11-gcp-creds\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.153666    1282 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bf5e0fa0-b505-42e6-98e4-bbed23229c11-data\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.153683    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4vl8\" (UniqueName: \"kubernetes.io/projected/bf5e0fa0-b505-42e6-98e4-bbed23229c11-kube-api-access-k4vl8\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:27:40 addons-693704 kubelet[1282]: I1002 20:27:40.153695    1282 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bf5e0fa0-b505-42e6-98e4-bbed23229c11-script\") on node \"addons-693704\" DevicePath \"\""
	Oct 02 20:27:42 addons-693704 kubelet[1282]: I1002 20:27:42.750789    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5e0fa0-b505-42e6-98e4-bbed23229c11" path="/var/lib/kubelet/pods/bf5e0fa0-b505-42e6-98e4-bbed23229c11/volumes"
	
	
	==> storage-provisioner [165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa] <==
	W1002 20:27:36.856973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:38.860247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:38.865035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:40.868375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:40.872762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:42.876691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:42.883949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:44.887412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:44.892622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:46.895665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:46.900091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:48.902822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:48.909678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:50.912668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:50.917308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:52.921338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:52.926598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:54.929479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:54.934380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:56.937591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:56.942232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:58.945805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:58.952622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:28:00.957355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:28:00.969372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
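Note: nearly every pull failure captured above is Docker Hub's unauthenticated rate limit ("toomanyrequests" from docker.io for the nginx and busybox images), not a fault in the cluster itself. A minimal mitigation sketch, assuming authenticated Docker Hub credentials are available on the CI host (only the profile name addons-693704 is taken from this run; the rest is hypothetical):

  # Pull once on the host with authenticated credentials, then side-load
  # the image into the minikube node so kubelet never contacts docker.io.
  docker login docker.io
  docker pull nginx:alpine
  minikube -p addons-693704 image load nginx:alpine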
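Note: the storage-provisioner block is entirely client-go deprecation warnings (warnings.go:70), emitted on every read of a v1 Endpoints object; they are noise rather than errors. The replacement discovery.k8s.io/v1 API can be inspected directly, e.g. as a spot check against this profile:

  kubectl --context addons-693704 get endpointslices -A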
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-693704 -n addons-693704
helpers_test.go:269: (dbg) Run:  kubectl --context addons-693704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-693704 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-693704 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b: exit status 1 (110.103938ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-693704/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:27:29 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqkzj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rqkzj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  33s   default-scheduler  Successfully assigned default/nginx to addons-693704
	  Normal  Pulling    32s   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-693704/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:21:59 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-78xtg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-78xtg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-693704
	  Warning  Failed     5m                   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x3 over 5m)     kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x2 over 2m59s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    28s (x4 over 5m)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     28s (x4 over 5m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    14s (x4 over 6m3s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t66j5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t66j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fndzf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v6xpn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6cg6b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-693704 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b: exit status 1
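Note: the task-pv-pod and test-local-path failures described above trace back to Docker Hub's unauthenticated pull rate limit (toomanyrequests), not to the CSI or local-path provisioners themselves. A minimal mitigation sketch, assuming mirror.gcr.io is reachable from the CI host and that pre-pulling is acceptable for this test environment:

	# Route docker.io pulls through a mirror when the profile is created:
	out/minikube-linux-arm64 start -p addons-693704 --container-runtime=crio --registry-mirror=https://mirror.gcr.io
	# Or pre-pull the image into the node so the kubelet never hits the limit:
	out/minikube-linux-arm64 -p addons-693704 image pull docker.io/nginx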
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (316.133805ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:28:02.780507 1005630 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:02.781638 1005630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:02.781681 1005630 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:02.781703 1005630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:02.782011 1005630 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:28:02.782440 1005630 mustload.go:65] Loading cluster: addons-693704
	I1002 20:28:02.782893 1005630 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:28:02.782931 1005630 addons.go:606] checking whether the cluster is paused
	I1002 20:28:02.783079 1005630 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:28:02.783119 1005630 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:28:02.783635 1005630 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:28:02.800431 1005630 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:02.800481 1005630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:28:02.821747 1005630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:28:02.920200 1005630 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:28:02.920288 1005630 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:28:02.990448 1005630 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:28:02.990470 1005630 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:28:02.990491 1005630 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:28:02.990511 1005630 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:28:02.990520 1005630 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:28:02.990524 1005630 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:28:02.990531 1005630 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:28:02.990534 1005630 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:28:02.990538 1005630 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:28:02.990550 1005630 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:28:02.990554 1005630 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:28:02.990557 1005630 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:28:02.990561 1005630 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:28:02.990564 1005630 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:28:02.990567 1005630 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:28:02.990573 1005630 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:28:02.990583 1005630 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:28:02.990591 1005630 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:28:02.990594 1005630 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:28:02.990597 1005630 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:28:02.990602 1005630 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:28:02.990605 1005630 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:28:02.990608 1005630 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:28:02.990610 1005630 cri.go:89] found id: ""
	I1002 20:28:02.990693 1005630 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:28:03.015345 1005630 out.go:203] 
	W1002 20:28:03.018203 1005630 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:28:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:28:03.018236 1005630 out.go:285] * 
	W1002 20:28:03.025937 1005630 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:28:03.028820 1005630 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
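Note: exit status 11 here is not specific to volumesnapshots; the identical csi-hostpath-driver failure follows below. Before disabling an addon, minikube checks whether the cluster is paused by listing runc containers, and `sudo runc list -f json` fails on this node because /run/runc does not exist, even though the crictl listing above succeeds. A reproduction sketch against the live profile:

	# The paused-state check that fails (run inside the node over SSH):
	out/minikube-linux-arm64 ssh -p addons-693704 -- sudo runc list -f json
	# The CRI-level listing that succeeds, matching the container IDs logged above:
	out/minikube-linux-arm64 ssh -p addons-693704 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system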
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (274.085117ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:28:03.109515 1005681 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:28:03.110761 1005681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:03.110808 1005681 out.go:374] Setting ErrFile to fd 2...
	I1002 20:28:03.110829 1005681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:28:03.111121 1005681 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:28:03.111449 1005681 mustload.go:65] Loading cluster: addons-693704
	I1002 20:28:03.111890 1005681 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:28:03.111924 1005681 addons.go:606] checking whether the cluster is paused
	I1002 20:28:03.112049 1005681 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:28:03.112081 1005681 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:28:03.112612 1005681 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:28:03.130437 1005681 ssh_runner.go:195] Run: systemctl --version
	I1002 20:28:03.130511 1005681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:28:03.148742 1005681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:28:03.244363 1005681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:28:03.244444 1005681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:28:03.276501 1005681 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:28:03.276525 1005681 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:28:03.276536 1005681 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:28:03.276541 1005681 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:28:03.276544 1005681 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:28:03.276548 1005681 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:28:03.276552 1005681 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:28:03.276555 1005681 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:28:03.276558 1005681 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:28:03.276565 1005681 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:28:03.276568 1005681 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:28:03.276571 1005681 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:28:03.276575 1005681 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:28:03.276578 1005681 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:28:03.276581 1005681 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:28:03.276586 1005681 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:28:03.276593 1005681 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:28:03.276597 1005681 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:28:03.276600 1005681 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:28:03.276603 1005681 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:28:03.276607 1005681 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:28:03.276611 1005681 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:28:03.276614 1005681 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:28:03.276617 1005681 cri.go:89] found id: ""
	I1002 20:28:03.276670 1005681 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:28:03.291593 1005681 out.go:203] 
	W1002 20:28:03.294366 1005681 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:28:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:28:03.294391 1005681 out.go:285] * 
	W1002 20:28:03.302018 1005681 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:28:03.304914 1005681 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (371.60s)

                                                
                                    
TestAddons/parallel/Headlamp (3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-693704 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-693704 --alsologtostderr -v=1: exit status 11 (260.091593ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:27:14.492715 1004193 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:14.493558 1004193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:14.493575 1004193 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:14.493581 1004193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:14.493931 1004193 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:27:14.494292 1004193 mustload.go:65] Loading cluster: addons-693704
	I1002 20:27:14.494660 1004193 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:14.494678 1004193 addons.go:606] checking whether the cluster is paused
	I1002 20:27:14.494777 1004193 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:14.494796 1004193 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:27:14.495318 1004193 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:27:14.515552 1004193 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:14.515611 1004193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:27:14.535074 1004193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:27:14.636708 1004193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:27:14.636811 1004193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:27:14.668106 1004193 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:27:14.668128 1004193 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:27:14.668133 1004193 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:27:14.668137 1004193 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:27:14.668141 1004193 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:27:14.668144 1004193 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:27:14.668147 1004193 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:27:14.668150 1004193 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:27:14.668154 1004193 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:27:14.668162 1004193 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:27:14.668166 1004193 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:27:14.668169 1004193 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:27:14.668173 1004193 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:27:14.668176 1004193 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:27:14.668180 1004193 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:27:14.668188 1004193 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:27:14.668195 1004193 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:27:14.668200 1004193 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:27:14.668203 1004193 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:27:14.668206 1004193 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:27:14.668211 1004193 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:27:14.668216 1004193 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:27:14.668219 1004193 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:27:14.668222 1004193 cri.go:89] found id: ""
	I1002 20:27:14.668277 1004193 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:27:14.683022 1004193 out.go:203] 
	W1002 20:27:14.685852 1004193 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:27:14.685873 1004193 out.go:285] * 
	W1002 20:27:14.693748 1004193 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:27:14.696799 1004193 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-693704 --alsologtostderr -v=1": exit status 11
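Note: the enable path runs the same paused-state check as the disable failures above (MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED), so Headlamp fails for the same /run/runc reason. The host itself is up, as the post-mortem probe below confirms; the same check can be run by hand:

	out/minikube-linux-arm64 status --format={{.Host}} -p addons-693704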
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-693704
helpers_test.go:243: (dbg) docker inspect addons-693704:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	        "Created": "2025-10-02T20:19:07.144298893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995109,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:19:07.216699876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hostname",
	        "HostsPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hosts",
	        "LogPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277-json.log",
	        "Name": "/addons-693704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-693704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-693704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	                "LowerDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-693704",
	                "Source": "/var/lib/docker/volumes/addons-693704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-693704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-693704",
	                "name.minikube.sigs.k8s.io": "addons-693704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab8175306a77dcd2868d77b0652aff78896362c7258aefc47fe7a07059e18c86",
	            "SandboxKey": "/var/run/docker/netns/ab8175306a77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-693704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:98:f0:2f:5f:be",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2b7a73ec267c22f9c2a0b05d90a02bfb26f74cfccf22ef9af628da6d1b040f0",
	                    "EndpointID": "a29bf68bc8126d88282105e99c5ad7822f95d3abd8c683fc3272ac8e0ad9c3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-693704",
	                        "d39c48e99245"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
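Note: in the inspect output above, NetworkSettings.Ports maps 22/tcp to 127.0.0.1:33900, which is the SSH endpoint the failing addon commands connected to (see the sshutil lines earlier in this report). A quick way to confirm the mapping while the container is still running:

	docker port addons-693704 22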
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-693704 -n addons-693704
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-693704 logs -n 25: (1.349466585s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-926391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ -o=json --download-only -p download-only-569491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p download-docker-496636 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p download-docker-496636                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p binary-mirror-261948 --alsologtostderr --binary-mirror http://127.0.0.1:38235 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p binary-mirror-261948                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ addons  │ disable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p addons-693704 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ ip      │ addons-693704 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ addons-693704 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-693704 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:42.587429  994709 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:42.587660  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.587694  994709 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:42.587713  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.588005  994709 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:18:42.588496  994709 out.go:368] Setting JSON to false
	I1002 20:18:42.589377  994709 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18060,"bootTime":1759418263,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:18:42.589480  994709 start.go:140] virtualization:  
	I1002 20:18:42.592863  994709 out.go:179] * [addons-693704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:18:42.596651  994709 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:42.596802  994709 notify.go:221] Checking for updates...
	I1002 20:18:42.602490  994709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:42.605403  994709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:18:42.608387  994709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:18:42.611210  994709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:18:42.614017  994709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:42.617196  994709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:42.641430  994709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:18:42.641548  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.702297  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.693145863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
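	The struct above is minikube's parse of `docker system info`. Assuming `jq` is available on the host, the handful of fields minikube actually checks can be pulled straight from the Docker CLI; a minimal sketch:
	
	# query the daemon for the fields minikube inspects (CPU count, memory,
	# cgroup driver, server version)
	docker system info --format '{{json .}}' | jq '{NCPU, MemTotal, CgroupDriver, ServerVersion}'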
	I1002 20:18:42.702404  994709 docker.go:319] overlay module found
	I1002 20:18:42.705389  994709 out.go:179] * Using the docker driver based on user configuration
	I1002 20:18:42.708231  994709 start.go:306] selected driver: docker
	I1002 20:18:42.708247  994709 start.go:936] validating driver "docker" against <nil>
	I1002 20:18:42.708259  994709 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:42.708953  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.762696  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.753788413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.762850  994709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:42.763087  994709 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:42.766074  994709 out.go:179] * Using Docker driver with root privileges
	I1002 20:18:42.768763  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:18:42.768836  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:42.768849  994709 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:42.768919  994709 start.go:350] cluster config:
	{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:42.771909  994709 out.go:179] * Starting "addons-693704" primary control-plane node in "addons-693704" cluster
	I1002 20:18:42.774712  994709 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:42.777590  994709 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:42.780428  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:42.780455  994709 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:42.780491  994709 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:42.780500  994709 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:42.780575  994709 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 20:18:42.780584  994709 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:42.780914  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:18:42.780943  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json: {Name:mkd60ee77440eccb122eacb378637e77c2fde5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:42.795665  994709 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:42.795798  994709 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:18:42.795824  994709 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:18:42.795836  994709 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:18:42.795846  994709 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:18:42.795852  994709 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:19:00.985065  994709 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:19:00.985108  994709 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:19:00.985137  994709 start.go:361] acquireMachinesLock for addons-693704: {Name:mkeb9eb5752430ab2d33310b44640ce93b8d2df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:19:00.985263  994709 start.go:365] duration metric: took 102.298µs to acquireMachinesLock for "addons-693704"
	I1002 20:19:00.985295  994709 start.go:94] Provisioning new machine with config: &{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:00.985372  994709 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:19:00.988832  994709 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:19:00.989104  994709 start.go:160] libmachine.API.Create for "addons-693704" (driver="docker")
	I1002 20:19:00.989159  994709 client.go:168] LocalClient.Create starting
	I1002 20:19:00.989296  994709 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 20:19:01.433837  994709 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 20:19:01.564238  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:19:01.580044  994709 cli_runner.go:211] docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:19:01.580136  994709 network_create.go:284] running [docker network inspect addons-693704] to gather additional debugging logs...
	I1002 20:19:01.580158  994709 cli_runner.go:164] Run: docker network inspect addons-693704
	W1002 20:19:01.596534  994709 cli_runner.go:211] docker network inspect addons-693704 returned with exit code 1
	I1002 20:19:01.596569  994709 network_create.go:287] error running [docker network inspect addons-693704]: docker network inspect addons-693704: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-693704 not found
	I1002 20:19:01.596590  994709 network_create.go:289] output of [docker network inspect addons-693704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-693704 not found
	
	** /stderr **
	I1002 20:19:01.596688  994709 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:01.612608  994709 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f17c0}
	I1002 20:19:01.612647  994709 network_create.go:124] attempt to create docker network addons-693704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:19:01.612711  994709 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-693704 addons-693704
	I1002 20:19:01.677264  994709 network_create.go:108] docker network addons-693704 192.168.49.0/24 created
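	The network was created with an explicit subnet, gateway, and MTU after the free-subnet probe above. A small sketch for confirming what was actually set, using the same `docker network inspect` command the log runs:
	
	# print the subnet and gateway of the freshly created network
	docker network inspect addons-693704 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'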
	I1002 20:19:01.677303  994709 kic.go:121] calculated static IP "192.168.49.2" for the "addons-693704" container
	I1002 20:19:01.677378  994709 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:19:01.693107  994709 cli_runner.go:164] Run: docker volume create addons-693704 --label name.minikube.sigs.k8s.io=addons-693704 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:19:01.711600  994709 oci.go:103] Successfully created a docker volume addons-693704
	I1002 20:19:01.711704  994709 cli_runner.go:164] Run: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:19:02.731832  994709 cli_runner.go:217] Completed: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (1.020058685s)
	I1002 20:19:02.731865  994709 oci.go:107] Successfully prepared a docker volume addons-693704
	I1002 20:19:02.731897  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:02.731915  994709 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:19:02.731979  994709 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:19:07.072259  994709 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.340238594s)
	I1002 20:19:07.072312  994709 kic.go:203] duration metric: took 4.340372991s to extract preloaded images to volume ...
	W1002 20:19:07.072445  994709 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:19:07.072554  994709 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:19:07.131614  994709 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-693704 --name addons-693704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-693704 --network addons-693704 --ip 192.168.49.2 --volume addons-693704:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:19:07.425756  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Running}}
	I1002 20:19:07.450427  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.471353  994709 cli_runner.go:164] Run: docker exec addons-693704 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:19:07.519322  994709 oci.go:144] the created container "addons-693704" has a running status.
	I1002 20:19:07.519348  994709 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa...
	I1002 20:19:07.874970  994709 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:19:07.902253  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.924631  994709 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:19:07.924649  994709 kic_runner.go:114] Args: [docker exec --privileged addons-693704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:19:07.982879  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:08.009002  994709 machine.go:93] provisionDockerMachine start ...
	I1002 20:19:08.009096  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:08.026925  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:08.027256  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:08.027273  994709 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:19:08.027902  994709 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 20:19:11.161848  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
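	Port 33900 here is the ephemeral host port Docker published for the container's 22/tcp; it differs on every run. Assuming the key path shown in this log, the node can be reached the same way libmachine does; a sketch:
	
	# look up the published SSH port, then connect as the 'docker' user
	docker port addons-693704 22/tcp
	ssh -i /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa \
	    -p 33900 docker@127.0.0.1 hostname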
	I1002 20:19:11.161874  994709 ubuntu.go:182] provisioning hostname "addons-693704"
	I1002 20:19:11.161998  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.180011  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.180318  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.180334  994709 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-693704 && echo "addons-693704" | sudo tee /etc/hostname
	I1002 20:19:11.318599  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.318673  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.334766  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.335074  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.335095  994709 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-693704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-693704/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-693704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:19:11.466309  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:19:11.466378  994709 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 20:19:11.466405  994709 ubuntu.go:190] setting up certificates
	I1002 20:19:11.466416  994709 provision.go:84] configureAuth start
	I1002 20:19:11.466491  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:11.484411  994709 provision.go:143] copyHostCerts
	I1002 20:19:11.484497  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 20:19:11.484648  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 20:19:11.484708  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 20:19:11.484757  994709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.addons-693704 san=[127.0.0.1 192.168.49.2 addons-693704 localhost minikube]
	I1002 20:19:11.600457  994709 provision.go:177] copyRemoteCerts
	I1002 20:19:11.600526  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:19:11.600571  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.617715  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:11.713831  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:19:11.731711  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:19:11.748544  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:19:11.765398  994709 provision.go:87] duration metric: took 298.94846ms to configureAuth
	I1002 20:19:11.765428  994709 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:19:11.765610  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:11.765720  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.782571  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.782895  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.782917  994709 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:19:12.024388  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
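	This step writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. Assuming the kicbase crio unit picks that file up through an EnvironmentFile drop-in (not shown in this log), the result can be checked on the node; a sketch:
	
	# confirm the env file landed and the unit references it
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environment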
	I1002 20:19:12.024409  994709 machine.go:96] duration metric: took 4.015387209s to provisionDockerMachine
	I1002 20:19:12.024420  994709 client.go:171] duration metric: took 11.035249443s to LocalClient.Create
	I1002 20:19:12.024430  994709 start.go:168] duration metric: took 11.035328481s to libmachine.API.Create "addons-693704"
	I1002 20:19:12.024438  994709 start.go:294] postStartSetup for "addons-693704" (driver="docker")
	I1002 20:19:12.024448  994709 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:19:12.024531  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:19:12.024581  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.046435  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.145575  994709 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:19:12.148535  994709 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:19:12.148564  994709 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:19:12.148574  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 20:19:12.148638  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 20:19:12.148666  994709 start.go:297] duration metric: took 124.222688ms for postStartSetup
	I1002 20:19:12.148981  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.164538  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:19:12.164807  994709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:19:12.164866  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.181186  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.274914  994709 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:19:12.279510  994709 start.go:129] duration metric: took 11.294122752s to createHost
	I1002 20:19:12.279576  994709 start.go:84] releasing machines lock for "addons-693704", held for 11.294297786s
	I1002 20:19:12.279683  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.298232  994709 ssh_runner.go:195] Run: cat /version.json
	I1002 20:19:12.298284  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.298302  994709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:19:12.298368  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.327555  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.332727  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.506484  994709 ssh_runner.go:195] Run: systemctl --version
	I1002 20:19:12.512752  994709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:19:12.553418  994709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:19:12.557546  994709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:19:12.557619  994709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:19:12.586608  994709 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:19:12.586633  994709 start.go:496] detecting cgroup driver to use...
	I1002 20:19:12.586667  994709 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:19:12.586718  994709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:19:12.605523  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:19:12.618955  994709 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:19:12.619019  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:19:12.636190  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:19:12.655245  994709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:19:12.773294  994709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:19:12.899674  994709 docker.go:234] disabling docker service ...
	I1002 20:19:12.899796  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:19:12.921306  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:19:12.935583  994709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:19:13.058429  994709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:19:13.191274  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:19:13.203980  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:19:13.218083  994709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:19:13.218172  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.227208  994709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:19:13.227310  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.236115  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.244683  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.253282  994709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:19:13.260942  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.269710  994709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.282906  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.291613  994709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:19:13.298701  994709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:19:13.306154  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.416108  994709 ssh_runner.go:195] Run: sudo systemctl restart crio
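	The sed sequence above pins the pause image, switches CRI-O to the cgroupfs cgroup manager (matching the driver detected on the host earlier), moves conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls, before the daemon-reload and restart. A sketch for verifying the rewritten drop-in on the node:
	
	# show the keys minikube just edited
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf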
	I1002 20:19:13.549800  994709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:19:13.549963  994709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:19:13.553947  994709 start.go:564] Will wait 60s for crictl version
	I1002 20:19:13.554015  994709 ssh_runner.go:195] Run: which crictl
	I1002 20:19:13.557729  994709 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:19:13.584434  994709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
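	`crictl version` succeeded here without any endpoint flag because /etc/crictl.yaml, written a few lines up, already points crictl at the CRI-O socket. The explicit equivalent, as a sketch:
	
	# with /etc/crictl.yaml in place these two are equivalent
	sudo crictl ps -a
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a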
	I1002 20:19:13.584598  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.611885  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.643761  994709 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:19:13.646706  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:13.662159  994709 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:19:13.665953  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:13.675384  994709 kubeadm.go:883] updating cluster {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:19:13.675498  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:13.675559  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.707568  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.707592  994709 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:19:13.707650  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.733091  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.733117  994709 cache_images.go:85] Images are preloaded, skipping loading
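	`crictl images --output json` is how minikube decides the preload already contains every required image. Assuming `jq` is present on the node, the same listing can be flattened to plain tags; a sketch:
	
	# list the image tags known to CRI-O
	sudo crictl images --output json | jq -r '.images[].repoTags[]'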
	I1002 20:19:13.733126  994709 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:19:13.733260  994709 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-693704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
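	This is the systemd drop-in minikube renders for the kubelet (installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line clears the packaged command before substituting the flags shown. Once installed it can be reviewed with a one-line sketch:
	
	# show the kubelet unit plus minikube's 10-kubeadm.conf drop-in
	systemctl cat kubelet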
	I1002 20:19:13.733342  994709 ssh_runner.go:195] Run: crio config
	I1002 20:19:13.792130  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:13.792153  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:13.792194  994709 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:19:13.792227  994709 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-693704 NodeName:addons-693704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:19:13.792401  994709 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-693704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:19:13.792492  994709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:19:13.800668  994709 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:19:13.800767  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:19:13.808293  994709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 20:19:13.821242  994709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:19:13.834169  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
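	The 2210-byte payload is the kubeadm config rendered above, staged as kubeadm.yaml.new before kubeadm consumes it. Recent kubeadm releases can lint such a file; a sketch (the validate subcommand depends on the kubeadm version, so treat this as an assumption):
	
	# static validation of the generated config
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new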
	I1002 20:19:13.846928  994709 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:19:13.850566  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:13.860224  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.968588  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:19:13.985352  994709 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704 for IP: 192.168.49.2
	I1002 20:19:13.985422  994709 certs.go:195] generating shared ca certs ...
	I1002 20:19:13.985470  994709 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:13.985658  994709 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 20:19:15.330293  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt ...
	I1002 20:19:15.330325  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt: {Name:mk4cd3e6dd08eb98d92774a50706472e7144a029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330529  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key ...
	I1002 20:19:15.330543  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key: {Name:mk973528442a241534dab3b3f10010ef617c41eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330647  994709 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 20:19:15.997150  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt ...
	I1002 20:19:15.997181  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt: {Name:mk99f3de897f678c1a5844576ab27113951f2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997373  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key ...
	I1002 20:19:15.997386  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key: {Name:mka357a75cbeebaba7cc94478a077ee2190bafb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997484  994709 certs.go:257] generating profile certs ...
	I1002 20:19:15.997541  994709 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key
	I1002 20:19:15.997561  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt with IP's: []
	I1002 20:19:16.185268  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt ...
	I1002 20:19:16.185298  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: {Name:mk19c4790d2aed31a89cf09dcf81ae3f076c409b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185485  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key ...
	I1002 20:19:16.185498  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key: {Name:mk1b58c21fd0fb98ae80d1aeead9a8a2c7b84f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185581  994709 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d
	I1002 20:19:16.185600  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:19:16.909759  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d ...
	I1002 20:19:16.909792  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d: {Name:mkcdcc8a35d2bead0bc666b364b50007c53b8ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.910784  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d ...
	I1002 20:19:16.910803  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d: {Name:mk54e705787535bd0f02f9a6cb06ac271457b26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.911454  994709 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt
	I1002 20:19:16.911552  994709 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key
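	The apiserver serving cert is generated with SANs for the in-cluster service IPs (10.96.0.1, 10.0.0.1), loopback, and the node IP 192.168.49.2. A sketch for listing those SANs from the written certificate:
	
	# print the Subject Alternative Names minikube baked in
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'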
	I1002 20:19:16.911609  994709 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key
	I1002 20:19:16.911632  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt with IP's: []
	I1002 20:19:17.189632  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt ...
	I1002 20:19:17.189663  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt: {Name:mkc2967e5b8de8de5ffc244b2174ce7d1307c7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.189855  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key ...
	I1002 20:19:17.189870  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key: {Name:mk3a5d9aa39ed72b68b1236fc674f044b595f3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.190670  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:19:17.190720  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:19:17.190746  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:19:17.190775  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 20:19:17.191345  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:19:17.209222  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:19:17.228051  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:19:17.245976  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:19:17.263876  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:19:17.281588  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:19:17.300066  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:19:17.317623  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:19:17.335889  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:19:17.355499  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:19:17.368597  994709 ssh_runner.go:195] Run: openssl version
	I1002 20:19:17.375290  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:19:17.383559  994709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387356  994709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387462  994709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.428204  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:19:17.436613  994709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:19:17.440314  994709 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:19:17.440367  994709 kubeadm.go:400] StartCluster: {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:19:17.440454  994709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:19:17.440516  994709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:19:17.467595  994709 cri.go:89] found id: ""
	I1002 20:19:17.467677  994709 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:19:17.475494  994709 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:19:17.483312  994709 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:19:17.483390  994709 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:19:17.491411  994709 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:19:17.491431  994709 kubeadm.go:157] found existing configuration files:
	
	I1002 20:19:17.491483  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:19:17.499089  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:19:17.499169  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:19:17.506794  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:19:17.514714  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:19:17.514785  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:19:17.522181  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.530993  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:19:17.531060  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.538976  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:19:17.546795  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:19:17.546892  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:19:17.554492  994709 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:19:17.596193  994709 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:19:17.596303  994709 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:19:17.627320  994709 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:19:17.627397  994709 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:19:17.627440  994709 kubeadm.go:318] OS: Linux
	I1002 20:19:17.627493  994709 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:19:17.627548  994709 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:19:17.627604  994709 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:19:17.627659  994709 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:19:17.627714  994709 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:19:17.627769  994709 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:19:17.627820  994709 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:19:17.627872  994709 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:19:17.627924  994709 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:19:17.698891  994709 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:19:17.699015  994709 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:19:17.699132  994709 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:19:17.708645  994709 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:19:17.711822  994709 out.go:252]   - Generating certificates and keys ...
	I1002 20:19:17.711957  994709 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:19:17.712048  994709 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:19:17.858214  994709 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:19:19.472133  994709 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:19:19.853869  994709 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:19:20.278527  994709 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:19:21.038810  994709 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:19:21.039005  994709 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:21.583298  994709 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:19:21.583465  994709 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:22.178821  994709 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:19:22.869729  994709 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:19:23.067072  994709 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:19:23.067180  994709 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:19:23.190079  994709 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:19:23.633624  994709 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:19:23.861907  994709 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:19:24.252326  994709 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:19:24.757359  994709 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:19:24.758089  994709 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:19:24.760711  994709 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:19:24.764198  994709 out.go:252]   - Booting up control plane ...
	I1002 20:19:24.764310  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:19:24.764403  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:19:24.764489  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:19:24.780867  994709 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:19:24.781188  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:19:24.788581  994709 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:19:24.789049  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:19:24.789397  994709 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:19:24.926323  994709 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:19:24.926459  994709 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:19:26.427259  994709 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501639322s
	I1002 20:19:26.430848  994709 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:19:26.430969  994709 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:19:26.431069  994709 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:19:26.431155  994709 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:19:28.445585  994709 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.013932999s
	I1002 20:19:30.026061  994709 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.595131543s
	I1002 20:19:31.934100  994709 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501085496s
	I1002 20:19:31.955369  994709 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:19:31.978849  994709 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:19:32.006745  994709 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:19:32.007240  994709 kubeadm.go:318] [mark-control-plane] Marking the node addons-693704 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:19:32.024906  994709 kubeadm.go:318] [bootstrap-token] Using token: 1gg1hv.lld6lawd4ni62mxk
	I1002 20:19:32.028031  994709 out.go:252]   - Configuring RBAC rules ...
	I1002 20:19:32.028186  994709 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:19:32.038937  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:19:32.049818  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:19:32.054935  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:19:32.062162  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:19:32.070713  994709 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:19:32.338182  994709 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:19:32.784741  994709 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:19:33.338747  994709 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:19:33.340165  994709 kubeadm.go:318] 
	I1002 20:19:33.340273  994709 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:19:33.340285  994709 kubeadm.go:318] 
	I1002 20:19:33.340381  994709 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:19:33.340391  994709 kubeadm.go:318] 
	I1002 20:19:33.340426  994709 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:19:33.340507  994709 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:19:33.340581  994709 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:19:33.340595  994709 kubeadm.go:318] 
	I1002 20:19:33.340666  994709 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:19:33.340674  994709 kubeadm.go:318] 
	I1002 20:19:33.340728  994709 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:19:33.340734  994709 kubeadm.go:318] 
	I1002 20:19:33.340801  994709 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:19:33.340885  994709 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:19:33.340967  994709 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:19:33.340973  994709 kubeadm.go:318] 
	I1002 20:19:33.341069  994709 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:19:33.341173  994709 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:19:33.341179  994709 kubeadm.go:318] 
	I1002 20:19:33.341310  994709 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341442  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 20:19:33.341466  994709 kubeadm.go:318] 	--control-plane 
	I1002 20:19:33.341470  994709 kubeadm.go:318] 
	I1002 20:19:33.341572  994709 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:19:33.341578  994709 kubeadm.go:318] 
	I1002 20:19:33.341672  994709 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341797  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 20:19:33.345719  994709 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:19:33.345963  994709 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:19:33.346097  994709 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:19:33.346131  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:33.346146  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:33.349554  994709 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:19:33.352542  994709 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:19:33.358001  994709 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:19:33.358065  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:19:33.375272  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 20:19:33.656465  994709 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:19:33.656564  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:33.656619  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-693704 minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=addons-693704 minikube.k8s.io/primary=true
	I1002 20:19:33.838722  994709 ops.go:34] apiserver oom_adj: -16
	I1002 20:19:33.838894  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.339235  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.839327  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.339115  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.839347  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.339936  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.838951  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.339896  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.839301  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.981403  994709 kubeadm.go:1113] duration metric: took 4.324906426s to wait for elevateKubeSystemPrivileges
	I1002 20:19:37.981430  994709 kubeadm.go:402] duration metric: took 20.541068078s to StartCluster
	I1002 20:19:37.981448  994709 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982146  994709 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:19:37.982540  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982732  994709 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:37.982850  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:19:37.983086  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:37.983116  994709 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 20:19:37.983227  994709 addons.go:69] Setting yakd=true in profile "addons-693704"
	I1002 20:19:37.983240  994709 addons.go:238] Setting addon yakd=true in "addons-693704"
	I1002 20:19:37.983262  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.983805  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.983948  994709 addons.go:69] Setting inspektor-gadget=true in profile "addons-693704"
	I1002 20:19:37.983963  994709 addons.go:238] Setting addon inspektor-gadget=true in "addons-693704"
	I1002 20:19:37.983984  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.984372  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.984784  994709 addons.go:69] Setting metrics-server=true in profile "addons-693704"
	I1002 20:19:37.984803  994709 addons.go:238] Setting addon metrics-server=true in "addons-693704"
	I1002 20:19:37.984846  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.985255  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986812  994709 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.987111  994709 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-693704"
	I1002 20:19:37.987164  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.986986  994709 addons.go:69] Setting cloud-spanner=true in profile "addons-693704"
	I1002 20:19:37.988662  994709 addons.go:238] Setting addon cloud-spanner=true in "addons-693704"
	I1002 20:19:37.988715  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.989206  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986995  994709 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-693704"
	I1002 20:19:37.992261  994709 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:37.992347  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993008  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.993440  994709 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.993470  994709 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-693704"
	I1002 20:19:37.993496  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993939  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986999  994709 addons.go:69] Setting default-storageclass=true in profile "addons-693704"
	I1002 20:19:37.999991  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-693704"
	I1002 20:19:37.987003  994709 addons.go:69] Setting gcp-auth=true in profile "addons-693704"
	I1002 20:19:38.001780  994709 mustload.go:65] Loading cluster: addons-693704
	I1002 20:19:38.002068  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:38.002442  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.004847  994709 addons.go:69] Setting registry=true in profile "addons-693704"
	I1002 20:19:38.004895  994709 addons.go:238] Setting addon registry=true in "addons-693704"
	I1002 20:19:38.004938  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.006258  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987015  994709 addons.go:69] Setting ingress=true in profile "addons-693704"
	I1002 20:19:38.027270  994709 addons.go:238] Setting addon ingress=true in "addons-693704"
	I1002 20:19:38.027361  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.027894  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987020  994709 addons.go:69] Setting ingress-dns=true in profile "addons-693704"
	I1002 20:19:38.058307  994709 addons.go:238] Setting addon ingress-dns=true in "addons-693704"
	I1002 20:19:38.058379  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.058921  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.096850  994709 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:19:38.105676  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:19:38.105709  994709 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:19:38.105842  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.008072  994709 out.go:179] * Verifying Kubernetes components...
	I1002 20:19:38.008152  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026483  994709 addons.go:69] Setting registry-creds=true in profile "addons-693704"
	I1002 20:19:38.116211  994709 addons.go:238] Setting addon registry-creds=true in "addons-693704"
	I1002 20:19:38.116261  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.116877  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.148060  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:38.026500  994709 addons.go:69] Setting storage-provisioner=true in profile "addons-693704"
	I1002 20:19:38.148217  994709 addons.go:238] Setting addon storage-provisioner=true in "addons-693704"
	I1002 20:19:38.148254  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.148800  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026507  994709 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-693704"
	I1002 20:19:38.181689  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-693704"
	I1002 20:19:38.026527  994709 addons.go:69] Setting volcano=true in profile "addons-693704"
	I1002 20:19:38.185000  994709 addons.go:238] Setting addon volcano=true in "addons-693704"
	I1002 20:19:38.185048  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.200337  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026533  994709 addons.go:69] Setting volumesnapshots=true in profile "addons-693704"
	I1002 20:19:38.221856  994709 addons.go:238] Setting addon volumesnapshots=true in "addons-693704"
	I1002 20:19:38.221908  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.222576  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.234975  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.241128  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:19:38.241462  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.027224  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.264137  994709 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:19:38.269034  994709 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:38.269076  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:19:38.269173  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.294256  994709 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:19:38.298092  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:19:38.298232  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:19:38.298258  994709 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:19:38.298339  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.305328  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:19:38.326652  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.333498  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.339026  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:19:38.339916  994709 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:19:38.340074  994709 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:19:38.348717  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:19:38.349240  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:38.349263  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:19:38.349335  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.370496  994709 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:19:38.370522  994709 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:19:38.370590  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393413  994709 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:38.393443  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:19:38.393518  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393705  994709 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:19:38.401523  994709 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:38.401566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:19:38.401656  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.415528  994709 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:19:38.419444  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:19:38.424637  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:19:38.430455  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:19:38.433425  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:19:38.434098  994709 out.go:179]   - Using image docker.io/registry:3.0.0
	W1002 20:19:38.437996  994709 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 20:19:38.442715  994709 addons.go:238] Setting addon default-storageclass=true in "addons-693704"
	I1002 20:19:38.442755  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.443165  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.443728  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.447652  994709 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:19:38.447679  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:19:38.447744  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.463660  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.464460  994709 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:19:38.466815  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:19:38.467693  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:38.467719  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:19:38.467819  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.470864  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:19:38.470890  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:19:38.470960  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.500926  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.502016  994709 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:19:38.503153  994709 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:19:38.510195  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:19:38.510222  994709 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:19:38.510304  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.511213  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 20:19:38.512545  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.514344  994709 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-693704"
	I1002 20:19:38.514385  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.514794  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.538485  994709 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:38.538505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:19:38.538577  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.563237  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:38.563266  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:19:38.563330  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.573905  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.605278  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.621692  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.637902  994709 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:38.637933  994709 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:19:38.638002  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.655698  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.682118  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.689646  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.707346  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.731079  994709 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:19:38.738329  994709 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:19:38.738517  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.739582  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.741646  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.741686  994709 retry.go:31] will retry after 354.664397ms: ssh: handshake failed: EOF
	I1002 20:19:38.741822  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:38.741834  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:19:38.741914  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.754174  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.790638  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.791850  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.791874  994709 retry.go:31] will retry after 168.291026ms: ssh: handshake failed: EOF
	I1002 20:19:38.891518  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:19:38.961324  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.961355  994709 retry.go:31] will retry after 311.734351ms: ssh: handshake failed: EOF
	I1002 20:19:39.180793  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:19:39.180831  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:19:39.246769  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:19:39.246793  994709 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:19:39.317148  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:19:39.317174  994709 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:19:39.327274  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:39.369305  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:39.371258  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:39.386300  994709 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:19:39.386327  994709 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:19:39.412476  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:19:39.412502  994709 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:19:39.447295  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:39.454691  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:19:39.454712  994709 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:19:39.483532  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:39.489546  994709 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.489572  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:19:39.600950  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:19:39.600977  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:19:39.608088  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.608113  994709 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:19:39.625123  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:19:39.625149  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:19:39.646231  994709 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.646256  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:19:39.666494  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.667190  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.667209  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:19:39.670888  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:39.686238  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:39.763670  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:39.778706  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:19:39.778734  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:19:39.800126  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.803147  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.824074  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:19:39.824103  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:19:39.826926  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.887787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:39.970247  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:19:39.970276  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:19:39.982837  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:19:39.982863  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:19:40.095977  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:19:40.096005  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:19:40.202267  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:19:40.202301  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:19:40.252464  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:19:40.252492  994709 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:19:40.425953  994709 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.425979  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:19:40.440769  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:19:40.440793  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.759801869s)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.140115117s)
	I1002 20:19:40.651466  994709 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 20:19:40.652113  994709 node_ready.go:35] waiting up to 6m0s for node "addons-693704" to be "Ready" ...
	I1002 20:19:40.708925  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.740283  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:19:40.740311  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:19:41.000182  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:19:41.000218  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:19:41.157742  994709 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-693704" context rescaled to 1 replicas
	I1002 20:19:41.160542  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:19:41.160566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:19:41.368904  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:19:41.368930  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:19:41.434210  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.106899571s)
	I1002 20:19:41.434277  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.064948233s)
	I1002 20:19:41.546392  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 20:19:42.681278  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:44.305558  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.934264933s)
	I1002 20:19:44.305591  994709 addons.go:479] Verifying addon ingress=true in "addons-693704"
	I1002 20:19:44.305742  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.858421462s)
	I1002 20:19:44.305803  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.822248913s)
	I1002 20:19:44.306140  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.639611107s)
	W1002 20:19:44.306168  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:44.306190  994709 retry.go:31] will retry after 271.617135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
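
Note that these retries cannot succeed: the stderr is byte-for-byte identical on every attempt because ig-crd.yaml itself is missing its apiVersion and kind fields, so kubectl rejects it client-side before anything reaches the API server. The retry.go lines follow minikube's generic apply-and-retry loop; a minimal Go sketch of that pattern (a hypothetical retryApply helper, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply re-runs fn with jittered, growing delays until it succeeds
	// or the deadline passes, mirroring the "will retry after 271.617135ms"
	// style of the lines above.
	func retryApply(fn func() error, deadline time.Duration) error {
		backoff := 250 * time.Millisecond
		start := time.Now()
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up after %v: %w", deadline, err)
			}
			sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			backoff *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryApply(func() error {
			attempts++
			if attempts < 3 {
				return fmt.Errorf("apply failed (attempt %d)", attempts)
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}
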
	I1002 20:19:44.306249  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.635338018s)
	I1002 20:19:44.306301  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.62003767s)
	I1002 20:19:44.306341  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.542651372s)
	I1002 20:19:44.306505  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.506354272s)
	I1002 20:19:44.306533  994709 addons.go:479] Verifying addon registry=true in "addons-693704"
	I1002 20:19:44.306707  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.503533192s)
	I1002 20:19:44.306720  994709 addons.go:479] Verifying addon metrics-server=true in "addons-693704"
	I1002 20:19:44.306759  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.479800741s)
	I1002 20:19:44.307143  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.41932494s)
	I1002 20:19:44.307220  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.598267016s)
	W1002 20:19:44.307774  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:19:44.307787  994709 retry.go:31] will retry after 292.505551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
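
"ensure CRDs are installed first" is the classic CRD/CR ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRDs that define its kind, and the API server has not registered the new type by the time the object arrives. minikube's answer is simply to retry the whole batch; another common remedy is to wait for the CRD's Established condition before applying any custom resources. A minimal client-go sketch of that wait (hypothetical code, reusing the kubeconfig path from the commands above):

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForCRD polls until the named CRD reports Established=True, after
	// which custom resources of that kind can be created safely.
	func waitForCRD(client *apiextclient.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			crd, err := client.ApiextensionsV1().CustomResourceDefinitions().
				Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("CRD %s not established within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := apiextclient.NewForConfigOrDie(cfg)
		fmt.Println(waitForCRD(client, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute))
	}
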
	I1002 20:19:44.308765  994709 out.go:179] * Verifying ingress addon...
	I1002 20:19:44.312945  994709 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-693704 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:19:44.313054  994709 out.go:179] * Verifying registry addon...
	I1002 20:19:44.315485  994709 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:19:44.317462  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:19:44.330428  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:19:44.330450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.330653  994709 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:19:44.330663  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
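
The kapi.go lines here (and the hundreds like them that dominate the rest of this log) are a poll loop: minikube repeatedly lists the pods behind each addon's label selector and re-logs the state until they leave Pending. A condensed client-go equivalent (a hypothetical sketch, not kapi.go itself):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until one is Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods %q in %q never became Running", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute))
	}
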
	W1002 20:19:44.357589  994709 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
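
This warning is an optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the write carried a stale resourceVersion and was rejected. The stock client-go remedy is to re-read and re-apply the mutation under retry.RetryOnConflict; a minimal sketch (hypothetical, not the addon's actual code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Re-read and re-apply the change whenever the Update hits a
		// resourceVersion conflict like the one logged above.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		fmt.Println("demote local-path:", err)
	}
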
	I1002 20:19:44.577967  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:44.601481  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:44.645691  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.099252349s)
	I1002 20:19:44.645728  994709 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:44.650504  994709 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:19:44.655039  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:19:44.667816  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:19:44.667846  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:44.821715  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.822383  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 20:19:45.161026  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:45.165268  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.325696  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.325851  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.657820  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.818501  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.820022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.829170  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.25115874s)
	W1002 20:19:45.829204  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:45.829226  994709 retry.go:31] will retry after 265.136863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:45.829298  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.227785836s)
	I1002 20:19:45.919439  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:19:45.919542  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:45.937711  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
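
The sshutil.go line spells out how minikube reaches the node: key-based SSH as the "docker" user against the host port that the docker inspect call just resolved (33900 maps to the container's 22/tcp). An equivalent standalone dial using the same values (a hypothetical sketch built on golang.org/x/crypto/ssh):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Same key path, user, and forwarded port as the log line above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33900", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected:", string(client.ServerVersion()))
	}
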
	I1002 20:19:46.064145  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:19:46.077198  994709 addons.go:238] Setting addon gcp-auth=true in "addons-693704"
	I1002 20:19:46.077246  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:46.077691  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:46.095085  994709 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:19:46.095135  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:46.095095  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:46.123058  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.164369  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.319756  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.321805  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:46.659517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.818237  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.819904  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:46.919069  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:46.919103  994709 retry.go:31] will retry after 624.133237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:46.922816  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:46.925777  994709 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:19:46.928684  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:19:46.928707  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:19:46.942491  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:19:46.942514  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:19:46.955438  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:46.955505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:19:46.968124  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:47.157960  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.322368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.322695  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.436497  994709 addons.go:479] Verifying addon gcp-auth=true in "addons-693704"
	I1002 20:19:47.440771  994709 out.go:179] * Verifying gcp-auth addon...
	I1002 20:19:47.444303  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:19:47.456952  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:19:47.457022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:47.544036  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:19:47.655544  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:47.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.819482  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.821740  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.947877  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.158799  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.321611  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.322176  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:48.351318  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:48.351351  994709 retry.go:31] will retry after 722.588456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:48.447412  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.658545  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.819500  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.821008  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.947811  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:49.074176  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:49.159044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.319369  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.321354  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:49.447565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:49.655967  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:49.657396  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.821534  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.821767  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:49.880261  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.880299  994709 retry.go:31] will retry after 823.045422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.948030  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.158812  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.318859  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.321025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.448207  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.657430  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.703742  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:50.819118  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.821057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.948368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:51.157785  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.320463  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.321544  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.448039  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:51.519077  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:51.519109  994709 retry.go:31] will retry after 1.329942428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:51.658147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.820515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.820951  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.947804  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:52.155980  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:52.158167  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.319637  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.321091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.448243  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:52.657697  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.819249  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.821572  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.849787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:52.949420  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:53.160825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.319057  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.321137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.448348  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:53.651601  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:53.651634  994709 retry.go:31] will retry after 4.065518596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:53.657468  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.820524  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.821033  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.948075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:54.157447  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:54.158479  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.318431  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.320091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.447825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:54.657905  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.819025  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.820709  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.947593  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.158249  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.320256  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.320691  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.447448  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.658171  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.820678  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.821069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.948074  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:56.157411  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.319659  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.320449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.447640  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:56.655854  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:56.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.818780  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.820792  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.947591  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.157766  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.318816  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.320927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.447823  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.657501  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.717603  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:57.820669  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.822065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.948192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:58.157875  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.319486  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.321536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.447507  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:58.508047  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:58.508078  994709 retry.go:31] will retry after 6.392155287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:58.657525  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.818599  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.820265  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.947800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:59.155950  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:59.158057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.321502  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.447568  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:59.657515  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.818527  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.820423  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.947158  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.191965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.322779  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.323712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.462450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.662487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.820978  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.821119  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.947103  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:01.165936  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.319105  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.321152  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.448705  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:01.656452  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:01.660465  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.820149  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.822237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.949425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.159485  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.320094  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.320855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.447847  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.658087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.822950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.823232  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.948025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.158590  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.318905  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.447723  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.821238  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.821662  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.947536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:04.157181  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:04.158586  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.319406  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.320569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.448026  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:04.657883  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.821087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.821316  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.900418  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:04.947850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.159494  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:05.319260  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.321183  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.448018  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.659872  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:05.704226  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.704266  994709 retry.go:31] will retry after 4.650395594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.819910  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.820237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.947300  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:06.157427  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:06.158681  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.319989  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.321509  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.447503  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:06.658321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.819075  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.820269  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.948556  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.158188  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.319456  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.320273  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.657768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.820523  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.821011  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.947761  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:08.157867  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.323022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.323328  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.447949  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:08.655164  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:08.657821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.820915  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.822270  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.947285  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.157631  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.319269  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.320630  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.447999  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.657541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.821314  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.821825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.947519  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:10.158695  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.320550  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.322127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.355287  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:10.448320  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:10.655677  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:10.658684  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.819582  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.820893  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.948135  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.160067  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:11.205481  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.205529  994709 retry.go:31] will retry after 8.886793783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.319286  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.320699  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.447959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.658932  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.818675  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.820427  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.947127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.157818  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.319903  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.320793  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.447987  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.819021  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.820692  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.947551  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:13.156319  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:13.159173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.319051  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.321143  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:13.657596  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.820773  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.820960  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.948072  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.158231  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.319445  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.320543  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.447788  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.658082  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.819689  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.821091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.948202  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:15.157836  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.319547  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.321065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.448065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:15.654975  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:15.658703  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.819187  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.823588  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.947274  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.158585  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.318872  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.321029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.448029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.658178  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.819331  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.819902  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.947835  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.158511  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.319014  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.320821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.447892  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.658439  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.818480  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.820595  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.947741  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:18.157451  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:18.159031  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.320870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.321273  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.448214  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:18.658565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.819116  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.821998  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.948071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.175178  994709 node_ready.go:49] node "addons-693704" is "Ready"
	I1002 20:20:19.175210  994709 node_ready.go:38] duration metric: took 38.523057861s for node "addons-693704" to be "Ready" ...
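
node_ready.go above has been polling the node object until its Ready condition turns True, retrying while it reported "Ready":"False". A sketch of one such check with client-go (an assumption about the pattern, not minikube's actual helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The log applies manifests with KUBECONFIG=/var/lib/minikube/kubeconfig on
	// the node; any kubeconfig that reaches the cluster works for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-693704", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q has \"Ready\":%q\n", node.Name, c.Status)
		}
	}
}
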
	I1002 20:20:19.175224  994709 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:20:19.175288  994709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:19.193541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.198169  994709 api_server.go:72] duration metric: took 41.215410635s to wait for apiserver process to appear ...
	I1002 20:20:19.198244  994709 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:20:19.198278  994709 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:20:19.210833  994709 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 20:20:19.213021  994709 api_server.go:141] control plane version: v1.34.1
	I1002 20:20:19.213118  994709 api_server.go:131] duration metric: took 14.852434ms to wait for apiserver health ...
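
The healthz probe above is a plain HTTPS GET against the apiserver that treats a 200 "ok" body as healthy. A minimal equivalent (a sketch; TLS verification is skipped here only because the snippet carries no CA bundle, whereas minikube's client is configured with the cluster certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as logged above.
	fmt.Printf("https://192.168.49.2:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
}
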
	I1002 20:20:19.213143  994709 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:20:19.259918  994709 system_pods.go:59] 18 kube-system pods found
	I1002 20:20:19.260007  994709 system_pods.go:61] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.260029  994709 system_pods.go:61] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.260046  994709 system_pods.go:61] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.260082  994709 system_pods.go:61] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.260110  994709 system_pods.go:61] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.260130  994709 system_pods.go:61] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.260165  994709 system_pods.go:61] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.260195  994709 system_pods.go:61] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 20:20:19.260219  994709 system_pods.go:61] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.260254  994709 system_pods.go:61] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.260278  994709 system_pods.go:61] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.260300  994709 system_pods.go:61] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.260337  994709 system_pods.go:61] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.260361  994709 system_pods.go:61] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.260379  994709 system_pods.go:61] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.260414  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.260436  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.260455  994709 system_pods.go:61] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.260473  994709 system_pods.go:74] duration metric: took 47.310617ms to wait for pod list to return data ...
	I1002 20:20:19.260513  994709 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:20:19.273557  994709 default_sa.go:45] found service account: "default"
	I1002 20:20:19.273635  994709 default_sa.go:55] duration metric: took 13.103031ms for default service account to be created ...
	I1002 20:20:19.273660  994709 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:20:19.293816  994709 system_pods.go:86] 18 kube-system pods found
	I1002 20:20:19.293898  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.293920  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.293938  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.293975  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.294002  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.294023  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.294068  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.294095  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.294114  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.294148  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.294173  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.294198  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.294246  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.294273  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.294296  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.294328  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.294351  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.294370  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.294416  994709 retry.go:31] will retry after 259.220758ms: missing components: kube-dns
	I1002 20:20:19.349532  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.350103  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:20:19.350175  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
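
"Found 2 Pods for label selector" marks the first time the registry selector matches anything; kapi.go:96 then keeps listing the pods behind each selector every few hundred milliseconds until all of them leave Pending. One iteration of that loop might look like this (a client-go sketch, not minikube's actual kapi.go):

package kapi

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PodsPending lists the pods matching selector in ns and reports any that are
// not yet Running; an empty result is what ends the wait loop logged above.
func PodsPending(ctx context.Context, cs kubernetes.Interface, ns, selector string) ([]string, error) {
	list, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	var pending []string
	for _, p := range list.Items {
		if p.Status.Phase != corev1.PodRunning {
			pending = append(pending, fmt.Sprintf("%s: %s", p.Name, p.Status.Phase))
		}
	}
	return pending, nil
}
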
	I1002 20:20:19.523669  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.643831  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:19.643867  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.643879  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:19.643887  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:19.643893  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending
	I1002 20:20:19.643899  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.643904  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.643909  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.643918  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.643923  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.643931  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.643935  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.643940  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.643944  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.643948  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.643961  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.643965  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.643972  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:19.643980  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.643985  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.644006  994709 retry.go:31] will retry after 341.024008ms: missing components: kube-dns
	I1002 20:20:19.671892  994709 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:20:19.671917  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.827024  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.828000  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.961916  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.012275  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.012323  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.012334  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.012342  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.012350  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.012356  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.012362  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.012372  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.012377  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.012388  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.012400  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.012405  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.012412  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.012423  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.012429  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.012437  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.012448  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.012455  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012463  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012473  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:20:20.012491  994709 retry.go:31] will retry after 476.605934ms: missing components: kube-dns
	I1002 20:20:20.092973  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:20.160870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.323333  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.326140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.449179  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.500973  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.501060  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.501104  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.501129  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.501166  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.501192  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.501214  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.501249  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.501273  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.501296  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.501332  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.501358  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.501381  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.501417  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.501444  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.501467  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.501502  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.501531  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501554  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501589  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.501625  994709 retry.go:31] will retry after 439.708141ms: missing components: kube-dns
	I1002 20:20:20.672849  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.819664  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.823622  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.948959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.951441  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.951521  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.951545  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.951570  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.951663  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.951686  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.951728  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.951751  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.951769  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.951805  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.951826  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.951847  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.951883  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.951908  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.951932  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.951970  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.951997  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.952021  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952055  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952078  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.952108  994709 retry.go:31] will retry after 739.124115ms: missing components: kube-dns
	I1002 20:20:21.175706  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.321496  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.322173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.447868  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:21.558307  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.465295653s)
	W1002 20:20:21.558346  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:21.558363  994709 retry.go:31] will retry after 14.276526589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
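
The three retry.go:31 delays for this failing apply (4.65s, 8.89s, 14.28s) grow roughly geometrically, which is the shape a jittered exponential backoff produces. A minimal sketch of that pattern (an assumed shape, not minikube's actual retry package):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered delay in [delay/2, 3*delay/2) that doubles after
// each failure.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(3, 4*time.Second, func() error {
		return fmt.Errorf("apply failed")
	})
}
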
	I1002 20:20:21.659390  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.696852  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:21.696889  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Running
	I1002 20:20:21.696903  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:21.696912  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:21.696919  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:21.696928  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:21.696933  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:21.696952  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:21.696957  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:21.696969  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:21.696973  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:21.696977  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:21.696984  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:21.696990  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:21.696997  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:21.697004  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:21.697010  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:21.697017  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697023  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697030  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:21.697039  994709 system_pods.go:126] duration metric: took 2.42335813s to wait for k8s-apps to be running ...
	I1002 20:20:21.697049  994709 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:20:21.697109  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:20:21.712608  994709 system_svc.go:56] duration metric: took 15.548645ms WaitForService to wait for kubelet
	I1002 20:20:21.712637  994709 kubeadm.go:586] duration metric: took 43.729883809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
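
The kubelet probe just above shells out instead of asking the API: systemctl is-active --quiet exits 0 only when the unit is active, so the command's exit status is the whole answer. A local sketch of the same probe (hypothetical; the log runs its variant with sudo over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output, so only the exit status matters (0 = active).
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
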
	I1002 20:20:21.712662  994709 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:20:21.716152  994709 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:20:21.716184  994709 node_conditions.go:123] node cpu capacity is 2
	I1002 20:20:21.716196  994709 node_conditions.go:105] duration metric: took 3.528491ms to run NodePressure ...
	I1002 20:20:21.716212  994709 start.go:242] waiting for startup goroutines ...
	I1002 20:20:21.822012  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.823203  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.948612  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:22.159863  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:22.319122  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:22.321160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:22.448407  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:22.659710  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:22.819576  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:22.822386  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:22.948013  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:23.158517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:23.320332  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:23.321199  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:23.448043  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:23.658814  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:23.819698  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:23.821542  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:23.947452  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:24.159652  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:24.320759  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:24.321094  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:24.448153  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:24.659358  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:24.818645  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:24.821517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:24.947484  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:25.159952  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:25.321433  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:25.321885  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:25.447985  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:25.658784  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:25.819014  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:25.821666  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:25.948082  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:26.158745  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:26.320197  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:26.321222  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:26.447719  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:26.659182  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:26.820428  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:26.822051  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:26.948367  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:27.160977  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:27.320573  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:27.321652  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:27.447890  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:27.658939  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:27.818985  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:27.821059  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:27.948366  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:28.161780  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:28.320321  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:28.321410  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:28.447506  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:28.658747  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:28.818976  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:28.821650  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:28.947845  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:29.159622  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:29.319270  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:29.321801  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:29.448168  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:29.658794  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:29.819079  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:29.821429  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:29.947641  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:30.159369  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:30.321561  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:30.321972  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:30.450696  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:30.659510  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:30.819828  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:30.821734  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:30.948076  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:31.159094  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.321697  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.322081  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.448086  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:31.658821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.818887  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.821458  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.947963  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:32.159614  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:32.320675  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:32.322256  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:32.447303  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:32.659923  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:32.820647  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:32.822321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:32.947394  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:33.159274  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:33.321237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:33.321628  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:33.448072  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:33.658574  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:33.818908  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:33.821510  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:33.947671  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:34.159537  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:34.319486  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:34.320732  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:34.447992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:34.659409  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:34.818851  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:34.821162  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:34.948557  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:35.160400  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:35.319095  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:35.321775  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:35.448790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:35.659951  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:35.821520  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:35.823605  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:35.835876  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:35.949194  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.163303  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.369184  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.369321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.659548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.819011  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.821548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.947353  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.013829  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177912886s)
	W1002 20:20:37.013873  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:37.013894  994709 retry.go:31] will retry after 16.584617559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
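The validation failure above is kubectl's client-side schema check: every document in a multi-document manifest must carry top-level apiVersion and kind fields, and at least one document inside /etc/kubernetes/addons/ig-crd.yaml is missing both, so the whole file is rejected even though the companion resources apply cleanly ("unchanged"/"configured"). A minimal sketch of that same check using gopkg.in/yaml.v3, handy for pinpointing the offending document (the file path and helper are illustrative assumptions, not part of minikube):

	// lintmanifest.go: assumed diagnostic helper, not minikube code. It
	// reproduces the client-side check kubectl fails on above: every YAML
	// document must set top-level apiVersion and kind.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		path := "ig-crd.yaml" // stand-in; pass the real manifest as argv[1]
		if len(os.Args) > 1 {
			path = os.Args[1]
		}
		f, err := os.Open(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
				os.Exit(1)
			}
			if doc == nil {
				continue // empty document between --- separators
			}
			// kubectl names exactly these fields when absent:
			// "apiVersion not set, kind not set".
			for _, field := range []string{"apiVersion", "kind"} {
				if _, ok := doc[field]; !ok {
					fmt.Printf("document %d: %s not set\n", i, field)
				}
			}
		}
	}

Note that the --validate=false escape hatch suggested in the stderr would only silence this check and hand the malformed document to the API server; the durable fix is restoring the missing header fields in the generated manifest.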
	I1002 20:20:37.159246  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:37.320047  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:37.320218  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:37.447625  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.659969  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:37.819508  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:37.822005  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:37.948056  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:38.159157  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:38.319619  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:38.321829  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:38.448325  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:38.659094  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:38.819553  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:38.822084  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:38.948224  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:39.158955  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:39.320358  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:39.321896  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:39.449482  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:39.658678  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:39.819596  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:39.822618  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:39.948042  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:40.159165  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:40.321897  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:40.322102  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:40.448692  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:40.659424  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:40.820442  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:40.822438  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:40.953063  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:41.160230  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:41.324908  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:41.325018  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:41.448365  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:41.659981  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:41.819204  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:41.825800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:41.948326  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:42.160221  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:42.323678  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:42.323892  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:42.448685  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:42.658968  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:42.820548  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:42.825595  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:42.948014  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:43.164487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:43.325308  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:43.325546  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:43.447728  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:43.659083  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:43.819978  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:43.821102  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:43.948368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:44.159319  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:44.322007  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:44.323000  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:44.448438  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:44.658701  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:44.818251  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:44.822093  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:44.948073  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:45.161234  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:45.337364  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:45.337615  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:45.448555  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:45.659203  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:45.820630  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:45.822020  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:45.948309  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:46.158793  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:46.322305  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:46.323889  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:46.449028  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:46.658214  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:46.821838  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:46.822319  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:46.948024  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:47.168388  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:47.319302  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:47.321739  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:47.447694  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:47.659702  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:47.818326  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:47.821106  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:47.948063  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:48.159478  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:48.321404  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:48.321977  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:48.448403  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:48.658631  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:48.818698  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:48.820834  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:48.947578  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:49.159437  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:49.321139  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:49.321707  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:49.447554  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:49.659009  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:49.819029  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:49.821580  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:49.947616  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:50.160129  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:50.320228  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.321534  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:50.451851  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:50.660002  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:50.820960  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.822905  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:50.947934  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:51.161193  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:51.320670  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:51.320931  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:51.447529  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:51.672034  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:51.823387  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:51.823949  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:51.949349  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:52.159584  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:52.321246  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:52.323112  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:52.450831  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:52.661759  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:52.819601  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:52.822812  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:52.948147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:53.158260  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.320954  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.321416  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:53.447684  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:53.598745  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:53.658921  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.822095  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.822140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:53.948027  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.159720  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.319139  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.323475  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:54.449052  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.659950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.800080  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201286682s)
	W1002 20:20:54.800158  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:54.800190  994709 retry.go:31] will retry after 36.238432013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
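The second identical failure above also shows the caller's recovery strategy: addons.go logs "apply failed, will retry" and retry.go schedules a longer, irregular wait each time (16.584617559s after the first failure, 36.238432013s after this one), the shape of exponential backoff with jitter. A stdlib-only sketch of that pattern (an assumed illustration of the pattern the log shows, not minikube's actual retry.go):

	// retrysketch.go: retry with exponential backoff plus jitter, matching
	// the growing irregular waits visible in the log. Assumed sketch only.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs apply up to attempts times, sleeping between failures with
	// a doubled base plus up to 50% random jitter, returning the last error.
	func retry(attempts int, base time.Duration, apply func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			backoff := base << uint(i) // 1x, 2x, 4x, ...
			backoff += time.Duration(rand.Int63n(int64(backoff / 2)))
			fmt.Printf("will retry after %s: %v\n", backoff, err)
			time.Sleep(backoff)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(3, 100*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				// stand-in for the failing kubectl apply above
				return fmt.Errorf("process exited with status 1")
			}
			return nil
		})
		fmt.Println("final:", err)
	}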
	I1002 20:20:54.821361  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:54.822118  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.948234  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:55.160177  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:55.319580  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:55.323520  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:55.447562  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:55.659028  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:55.820055  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:55.822888  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:55.948043  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:56.160147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:56.320399  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:56.322153  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:56.448568  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:56.662690  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:56.822552  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:56.822724  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:56.948654  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:57.165959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:57.323611  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:57.324125  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:57.448839  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:57.659243  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:57.827311  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:57.827796  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:57.951325  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:58.160073  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:58.325194  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:58.325637  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:58.449778  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:58.663289  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:58.823656  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:58.824142  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:58.951729  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:59.159992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:59.320856  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:59.322241  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:59.451389  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:59.659448  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:59.824351  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:59.824752  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:59.948244  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:00.178734  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:00.334811  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:00.335334  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:00.449977  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:00.660186  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:00.819874  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:00.820185  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:00.948376  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:01.159525  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:01.325608  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:01.326800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:01.448685  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:01.660941  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:01.819636  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:01.822396  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:01.947837  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:02.160841  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:02.319889  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:02.323200  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:02.447592  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:02.663926  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:02.819507  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:02.822454  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:02.948180  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:03.158836  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:03.320854  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.322443  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:03.447975  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:03.658196  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:03.823965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:03.824515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.947809  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.160130  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.319792  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.320970  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:04.458399  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.659641  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.819337  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.821346  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:04.948487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.159402  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.318537  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.320782  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:05.447768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.659047  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.820074  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.821224  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:05.948044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.158918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.319264  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.321170  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:06.448425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.661071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.819015  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.821112  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:06.948418  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.159287  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:07.320880  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.322732  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:07.448299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.659089  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:07.833876  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:07.834240  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.948415  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.158976  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.320300  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.320874  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:08.448633  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.659076  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.820477  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.820621  994709 kapi.go:107] duration metric: took 1m24.50316116s to wait for kubernetes.io/minikube-addons=registry ...
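The kapi.go:107 line above closes one of the four interleaved kapi.go:96 polling loops: each addon label selector (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) is waited on concurrently, re-checking pod state on its own roughly 500ms rhythm until the pod leaves Pending, and the total wall-clock wait is then recorded as a duration metric. A minimal stdlib sketch of that shape (assumed illustration; the real code looks up pod phase via the Kubernetes API rather than the stub check used here):

	// waitsketch.go: concurrent label-selector waiters with a duration
	// metric, mirroring the kapi.go:96 / kapi.go:107 pairing in the log.
	// Assumed illustration only, not minikube's kapi.go.
	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// waitForPod polls check on a fixed interval until the reported state is
	// no longer Pending, then prints how long the whole wait took.
	func waitForPod(label string, interval time.Duration, check func() string) {
		start := time.Now()
		for state := check(); state == "Pending"; state = check() {
			fmt.Printf("waiting for pod %q, current state: %s\n", label, state)
			time.Sleep(interval)
		}
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), label)
	}

	func main() {
		labels := []string{
			"kubernetes.io/minikube-addons=registry",
			"app.kubernetes.io/name=ingress-nginx",
		}
		var wg sync.WaitGroup
		for i, label := range labels {
			deadline := time.Now().Add(time.Duration(i+1) * 300 * time.Millisecond)
			wg.Add(1)
			go func(label string, deadline time.Time) {
				defer wg.Done()
				waitForPod(label, 100*time.Millisecond, func() string {
					if time.Now().Before(deadline) {
						return "Pending" // stub for the real pod-phase lookup
					}
					return "Running"
				})
			}(label, deadline)
		}
		wg.Wait()
	}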
	I1002 20:21:08.948034  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.158956  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.319324  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.447625  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.660083  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.826440  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.949323  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.163992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.320103  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.449195  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.658029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.843087  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.948535  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.159397  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.319712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.447769  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.659756  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.819109  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.947822  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.159549  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.319206  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.446918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.658927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.824411  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.947802  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.159449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.318706  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.454138  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.658608  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.819013  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.948036  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.159253  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.319616  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.449075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.662100  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.824454  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.950365  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.161131  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.319196  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.447530  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.663409  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.820874  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.953095  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.165487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.319583  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.448606  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.659953  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.819503  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.975219  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.158372  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.318879  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.448192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.658937  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.820351  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.947275  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.158790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.319421  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.659923  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.822375  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.947862  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.159020  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.319073  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.447850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.659710  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.818515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.947671  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.160392  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.318657  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.448137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.660115  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.819099  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.951129  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.160373  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.325467  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:21.449746  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.659955  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.819131  994709 kapi.go:107] duration metric: took 1m37.503635731s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:21:21.948370  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.158762  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.447738  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.658570  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.949101  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.158220  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.451919  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.658790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.948375  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.159201  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.449117  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.659750  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.948295  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.160000  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.448116  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.658136  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.948058  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.158569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.447775  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.658964  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.948377  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.159144  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.448069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.658935  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.955751  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.159540  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.448912  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.662299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.947885  994709 kapi.go:107] duration metric: took 1m41.503580566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:21:28.951140  994709 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-693704 cluster.
	I1002 20:21:28.954142  994709 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:21:28.956995  994709 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
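	For reference, the per-pod opt-out described in the three lines above is just a label on the pod. A minimal sketch, applied as-is (pod name and image are hypothetical; the label key is the one named in the message, and "true" is the conventional value the webhook looks for):
	  kubectl apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                 # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"     # gcp-auth webhook skips credential mounting for this pod
	  spec:
	    containers:
	    - name: app
	      image: busybox:stable
	      command: ["sleep", "3600"]
	  EOF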
	I1002 20:21:29.159855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:29.664073  994709 kapi.go:107] duration metric: took 1m45.009034533s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:21:31.039676  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:21:31.852592  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:21:31.852690  994709 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
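	The failure above is kubectl's client-side validation rejecting a document in ig-crd.yaml that lacks the top-level `apiVersion` and `kind` fields; every object kubectl applies must carry both, and the suggested `--validate=false` merely suppresses the check rather than fixing the manifest. For illustration, a minimal well-formed CRD skeleton that passes that validation (group and names are hypothetical, not the actual Inspektor Gadget CRD):
	  kubectl apply --dry-run=client -f - <<'EOF'
	  apiVersion: apiextensions.k8s.io/v1    # required top-level field
	  kind: CustomResourceDefinition         # required top-level field
	  metadata:
	    name: traces.demo.example.com        # hypothetical; must be <plural>.<group>
	  spec:
	    group: demo.example.com
	    names:
	      plural: traces
	      singular: trace
	      kind: Trace
	    scope: Namespaced
	    versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	  EOF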
	I1002 20:21:31.856656  994709 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 20:21:31.859688  994709 addons.go:514] duration metric: took 1m53.876564642s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner ingress-dns metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 20:21:31.859739  994709 start.go:247] waiting for cluster config update ...
	I1002 20:21:31.859761  994709 start.go:256] writing updated cluster config ...
	I1002 20:21:31.860060  994709 ssh_runner.go:195] Run: rm -f paused
	I1002 20:21:31.863547  994709 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:31.867571  994709 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.872068  994709 pod_ready.go:94] pod "coredns-66bc5c9577-4kbq4" is "Ready"
	I1002 20:21:31.872092  994709 pod_ready.go:86] duration metric: took 4.493776ms for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.874237  994709 pod_ready.go:83] waiting for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.878256  994709 pod_ready.go:94] pod "etcd-addons-693704" is "Ready"
	I1002 20:21:31.878280  994709 pod_ready.go:86] duration metric: took 4.022961ms for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.880276  994709 pod_ready.go:83] waiting for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.885189  994709 pod_ready.go:94] pod "kube-apiserver-addons-693704" is "Ready"
	I1002 20:21:31.885218  994709 pod_ready.go:86] duration metric: took 4.915919ms for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.887484  994709 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.267515  994709 pod_ready.go:94] pod "kube-controller-manager-addons-693704" is "Ready"
	I1002 20:21:32.267553  994709 pod_ready.go:86] duration metric: took 380.043461ms for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.468152  994709 pod_ready.go:83] waiting for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.869233  994709 pod_ready.go:94] pod "kube-proxy-gdxqs" is "Ready"
	I1002 20:21:32.869266  994709 pod_ready.go:86] duration metric: took 401.082172ms for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.067662  994709 pod_ready.go:83] waiting for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469284  994709 pod_ready.go:94] pod "kube-scheduler-addons-693704" is "Ready"
	I1002 20:21:33.469361  994709 pod_ready.go:86] duration metric: took 401.671243ms for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469380  994709 pod_ready.go:40] duration metric: took 1.605801066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
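	The "extra waiting" above polls kube-system pods by the listed labels until each reports Ready. Roughly the same check can be done by hand with the selectors from the log (standard kubectl; the timeout value is an arbitrary choice):
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system get pods -l component=kube-apiserver
	  # block until matching pods report Ready
	  kubectl -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=240s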
	I1002 20:21:33.530905  994709 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:21:33.534526  994709 out.go:179] * Done! kubectl is now configured to use "addons-693704" cluster and "default" namespace by default
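	On the "minor skew: 1" line above: kubectl's version-skew policy supports a client within one minor version of the API server, so a 1.33 kubectl against the 1.34.1 cluster is in policy and only logged, not treated as an error. To check the skew yourself:
	  kubectl version            # prints both client and server versions
	  kubectl version --client   # client only; works without a reachable cluster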
	
	
	==> CRI-O <==
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.153251471Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 Namespace:local-path-storage ID:132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a UID:bf5e0fa0-b505-42e6-98e4-bbed23229c11 NetNS:/var/run/netns/f52e2632-63f0-4221-b5df-87894cfaabf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b235c0}] Aliases:map[]}"
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.153597103Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 for CNI network kindnet (type=ptp)"
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.157274358Z" level=info msg="Ran pod sandbox 132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a with infra container: local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84/POD" id=5ede4512-a8f4-4ead-84f8-176c2b2ecbbe name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.159231657Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2081ab0b-22cb-4452-96f2-b7895993fe93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.159391842Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=2081ab0b-22cb-4452-96f2-b7895993fe93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.159448251Z" level=info msg="Neither image nor artifact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=2081ab0b-22cb-4452-96f2-b7895993fe93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.7715208Z" level=info msg="Stopping pod sandbox: 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=7575ccd5-7cb5-4e81-96ba-6fd5fd567fd2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.771579744Z" level=info msg="Stopped pod sandbox (already stopped): 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=7575ccd5-7cb5-4e81-96ba-6fd5fd567fd2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.772377869Z" level=info msg="Removing pod sandbox: 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=f0a36a73-df2f-4723-836c-24ab29e03b33 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.776818025Z" level=info msg="Removed pod sandbox: 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=f0a36a73-df2f-4723-836c-24ab29e03b33 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 20:24:33 addons-693704 crio[828]: time="2025-10-02T20:24:33.183176382Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:25:03 addons-693704 crio[828]: time="2025-10-02T20:25:03.446159523Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=944fff5c-71e3-4e14-b4b7-444e23e9473e name=/runtime.v1.ImageService/PullImage
	Oct 02 20:25:03 addons-693704 crio[828]: time="2025-10-02T20:25:03.448421438Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:25:35 addons-693704 crio[828]: time="2025-10-02T20:25:35.811561747Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.089875496Z" level=info msg="Pulling image: docker.io/nginx:latest" id=21014f01-c8f3-4d6b-82ef-0cc5114bd2d6 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.09236936Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596537996Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596707116Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596757445Z" level=info msg="Neither image nor artifact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.747966118Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.748159705Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.748208131Z" level=info msg="Neither image nor artifact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:36 addons-693704 crio[828]: time="2025-10-02T20:26:36.373758435Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:27:06 addons-693704 crio[828]: time="2025-10-02T20:27:06.643610753Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=80ac12bc-c4b9-49ab-9f30-9bfc5d720786 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:27:06 addons-693704 crio[828]: time="2025-10-02T20:27:06.646541053Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	0bc9f0d1b235e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          5 minutes ago       Running             busybox                                  0                   a4b1fc9c97e53       busybox                                    default
	6928dd54cd320       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          5 minutes ago       Running             csi-snapshotter                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	40761b95b2196       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 5 minutes ago       Running             gcp-auth                                 0                   9c1545073abea       gcp-auth-78565c9fb4-27djq                  gcp-auth
	8860f0e019516       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          5 minutes ago       Running             csi-provisioner                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	36c49020464e2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            5 minutes ago       Running             liveness-probe                           0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b7161126faae3       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           5 minutes ago       Running             hostpath                                 0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b2b0003c8ca36       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             5 minutes ago       Running             controller                               0                   3a08c5d217c56       ingress-nginx-controller-9cc49f96f-9frwt   ingress-nginx
	2852575f20001       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            6 minutes ago       Running             gadget                                   0                   34878d06228a7       gadget-gljs2                               gadget
	ee97eb0b32c7f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                6 minutes ago       Running             node-driver-registrar                    0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	e42d2c0b7778e       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             6 minutes ago       Running             local-path-provisioner                   0                   b4f667a1ce299       local-path-provisioner-648f6765c9-v6khh    local-path-storage
	fc0714b2fd72f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              6 minutes ago       Running             registry-proxy                           0                   c8535afb414d5       registry-proxy-2kw45                       kube-system
	bca1297af7427       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   6 minutes ago       Exited              patch                                    0                   e925887ddf0d9       ingress-nginx-admission-patch-v6xpn        ingress-nginx
	627ce890f2b48       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               6 minutes ago       Running             cloud-spanner-emulator                   0                   49dda3c4634a4       cloud-spanner-emulator-85f6b7fc65-5wsmw    default
	16f4af5cddb75       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           6 minutes ago       Running             registry                                 0                   4bae41325f3f5       registry-66898fdd98-8rftt                  kube-system
	91fa943497ee5       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        6 minutes ago       Running             metrics-server                           0                   27cb63141e106       metrics-server-85b7d694d7-8pl6l            kube-system
	439510daf689e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               6 minutes ago       Running             minikube-ingress-dns                     0                   e547aac4b280e       kube-ingress-dns-minikube                  kube-system
	063fa56393267       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              6 minutes ago       Running             csi-resizer                              0                   20ac69c0a7e28       csi-hostpath-resizer-0                     kube-system
	948a7498f368d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   6 minutes ago       Running             csi-external-health-monitor-controller   0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	bbd0c0fdbe948       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             6 minutes ago       Running             csi-attacher                             0                   e6f6a7809eb96       csi-hostpath-attacher-0                    kube-system
	697e9a6f92fb8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   6 minutes ago       Exited              create                                   0                   ec9abb5f653b7       ingress-nginx-admission-create-fndzf       ingress-nginx
	4a5b5d50e1426       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     6 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ae9275c193e86       nvidia-device-plugin-daemonset-jblz6       kube-system
	4757a91ace2d4       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      6 minutes ago       Running             volume-snapshot-controller               0                   7cb6188e8093e       snapshot-controller-7d9fbc56b8-49h86       kube-system
	88520ea2c4ca7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      6 minutes ago       Running             volume-snapshot-controller               0                   4de0d58fcc8d5       snapshot-controller-7d9fbc56b8-bw7rc       kube-system
	9390fd50f454e       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              6 minutes ago       Running             yakd                                     0                   a77b4648943e2       yakd-dashboard-5ff678cb9-b48gd             yakd-dashboard
	ec242b99be750       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             6 minutes ago       Running             coredns                                  0                   5e1993cbe5e41       coredns-66bc5c9577-4kbq4                   kube-system
	165a582582a89       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             6 minutes ago       Running             storage-provisioner                      0                   8b4b5f8349762       storage-provisioner                        kube-system
	cde8e7a8a028e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             7 minutes ago       Running             kindnet-cni                              0                   b1a33925c911a       kindnet-p9zvn                              kube-system
	0703880dcf265       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             7 minutes ago       Running             kube-proxy                               0                   18175bde14b29       kube-proxy-gdxqs                           kube-system
	972d6e9616c37       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             7 minutes ago       Running             etcd                                     0                   789f38c5890c2       etcd-addons-693704                         kube-system
	020148eb47c8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             7 minutes ago       Running             kube-scheduler                           0                   3aa090880fcae       kube-scheduler-addons-693704               kube-system
	ab99c3bb8f644       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             7 minutes ago       Running             kube-controller-manager                  0                   629d2cf069469       kube-controller-manager-addons-693704      kube-system
	71c9ea9528918       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             7 minutes ago       Running             kube-apiserver                           0                   de4f0abfefce3       kube-apiserver-addons-693704               kube-system
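	A sketch of how to reproduce the table above on the node: with CRI-O as the runtime it is essentially the crictl view, reachable through minikube ssh (standard crictl subcommands; the exact columns may differ):
	  minikube ssh -p addons-693704
	  sudo crictl ps -a    # all containers, including the Exited admission create/patch jobs
	  sudo crictl pods     # the sandboxes behind the POD ID column
	  sudo crictl images   # resolves bare image IDs such as the coredns and etcd entries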
	
	
	==> coredns [ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b] <==
	[INFO] 10.244.0.17:55859 - 34053 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.006721575s
	[INFO] 10.244.0.17:55859 - 46822 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000305001s
	[INFO] 10.244.0.17:55859 - 21325 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000282717s
	[INFO] 10.244.0.17:37045 - 20421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162088s
	[INFO] 10.244.0.17:37045 - 20651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000128325s
	[INFO] 10.244.0.17:51048 - 61194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092519s
	[INFO] 10.244.0.17:51048 - 61672 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085027s
	[INFO] 10.244.0.17:57091 - 44872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088334s
	[INFO] 10.244.0.17:57091 - 44684 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105589s
	[INFO] 10.244.0.17:59527 - 40959 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003459669s
	[INFO] 10.244.0.17:59527 - 41156 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003770241s
	[INFO] 10.244.0.17:59136 - 21305 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142257s
	[INFO] 10.244.0.17:59136 - 21125 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093717s
	[INFO] 10.244.0.21:41484 - 12317 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192315s
	[INFO] 10.244.0.21:60775 - 50484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142913s
	[INFO] 10.244.0.21:49862 - 44888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127521s
	[INFO] 10.244.0.21:54840 - 52239 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149642s
	[INFO] 10.244.0.21:42560 - 6869 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156624s
	[INFO] 10.244.0.21:41861 - 43315 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000298545s
	[INFO] 10.244.0.21:38412 - 8398 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002294645s
	[INFO] 10.244.0.21:40087 - 34579 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002408201s
	[INFO] 10.244.0.21:50163 - 3512 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006805026s
	[INFO] 10.244.0.21:42501 - 46640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006618816s
	[INFO] 10.244.0.23:46061 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191659s
	[INFO] 10.244.0.23:58330 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122318s
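	The NXDOMAIN runs above are ordinary search-path expansion, not failures: with the default pod resolver config (ndots:5), a service name is retried with each search domain appended (.kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local, then the node's us-east-2.compute.internal domain) before the bare name answers NOERROR. A trailing dot marks the name fully qualified and skips the walk; for example, from any pod with nslookup available:
	  nslookup registry.kube-system.svc.cluster.local    # triggers the search-path walk seen above
	  nslookup registry.kube-system.svc.cluster.local.   # FQDN form: resolved in a single query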
	
	
	==> describe nodes <==
	Name:               addons-693704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-693704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=addons-693704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-693704
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-693704"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:19:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-693704
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:27:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:27:11 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:27:11 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:27:11 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:27:11 +0000   Thu, 02 Oct 2025 20:20:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-693704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 db645666b7ad4f1695da9df78e9fa367
	  System UUID:                021278b1-6d13-4d8b-91c7-a5de147567f7
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  default                     cloud-spanner-emulator-85f6b7fc65-5wsmw                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  gadget                      gadget-gljs2                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  gcp-auth                    gcp-auth-78565c9fb4-27djq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9frwt                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m32s
	  kube-system                 coredns-66bc5c9577-4kbq4                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m38s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 csi-hostpathplugin-kkptd                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 etcd-addons-693704                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m45s
	  kube-system                 kindnet-p9zvn                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m39s
	  kube-system                 kube-apiserver-addons-693704                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-controller-manager-addons-693704                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-gdxqs                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-scheduler-addons-693704                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 metrics-server-85b7d694d7-8pl6l                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         7m33s
	  kube-system                 nvidia-device-plugin-daemonset-jblz6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 registry-66898fdd98-8rftt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 registry-creds-764b6fb674-6cg6b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 registry-proxy-2kw45                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-49h86                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 snapshot-controller-7d9fbc56b8-bw7rc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  local-path-storage          helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  local-path-storage          local-path-provisioner-648f6765c9-v6khh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b48gd                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m36s                  kube-proxy       
	  Normal   Starting                 7m50s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m50s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m50s (x8 over 7m50s)  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m50s (x8 over 7m50s)  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m50s (x8 over 7m50s)  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m44s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m44s                  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m44s                  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m44s                  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m40s                  node-controller  Node addons-693704 event: Registered Node addons-693704 in Controller
	  Normal   NodeReady                6m57s                  kubelet          Node addons-693704 status is now: NodeReady
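	The section above corresponds to `kubectl describe node addons-693704`. One point worth reading off it: the 2-CPU node already carries 1050m (52%) of CPU requests with this addon set enabled, so scheduling headroom is thin. To reproduce:
	  kubectl describe node addons-693704
	  kubectl get node addons-693704 -o wide   # condensed one-line view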
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3] <==
	{"level":"warn","ts":"2025-10-02T20:19:28.781544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.806892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.814167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.836647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.852657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.878105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.886646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.904572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.925806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.935913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.956578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.971517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.993677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.031509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.041915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.068902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.157895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.092047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.118929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.895880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.909631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.000732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.017116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:20:36.364046Z","caller":"traceutil/trace.go:172","msg":"trace[1063042819] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"113.56953ms","start":"2025-10-02T20:20:36.250465Z","end":"2025-10-02T20:20:36.364035Z","steps":["trace[1063042819] 'process raft request'  (duration: 56.881349ms)","trace[1063042819] 'compare'  (duration: 56.419938ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T20:20:36.365279Z","caller":"traceutil/trace.go:172","msg":"trace[29069078] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"104.71736ms","start":"2025-10-02T20:20:36.259205Z","end":"2025-10-02T20:20:36.363922Z","steps":["trace[29069078] 'process raft request'  (duration: 104.653649ms)"],"step_count":1}
	
	
	==> gcp-auth [40761b95b219669fa13be3f37e9874311bcd42514e92101fcec6f883bf46c837] <==
	2025/10/02 20:21:27 GCP Auth Webhook started!
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:55 Ready to marshal response ...
	2025/10/02 20:21:55 Ready to write response ...
	2025/10/02 20:21:59 Ready to marshal response ...
	2025/10/02 20:21:59 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:24:21 Ready to marshal response ...
	2025/10/02 20:24:21 Ready to write response ...
	2025/10/02 20:26:52 Ready to marshal response ...
	2025/10/02 20:26:52 Ready to write response ...
	
	
	==> kernel <==
	 20:27:16 up  5:09,  0 user,  load average: 1.32, 1.52, 2.57
	Linux addons-693704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0] <==
	I1002 20:25:08.912320       1 main.go:301] handling current node
	I1002 20:25:18.907634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:18.907669       1 main.go:301] handling current node
	I1002 20:25:28.914475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:28.914591       1 main.go:301] handling current node
	I1002 20:25:38.908394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:38.908433       1 main.go:301] handling current node
	I1002 20:25:48.907613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:48.907646       1 main.go:301] handling current node
	I1002 20:25:58.910119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:58.910161       1 main.go:301] handling current node
	I1002 20:26:08.907658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:08.907789       1 main.go:301] handling current node
	I1002 20:26:18.914211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:18.914253       1 main.go:301] handling current node
	I1002 20:26:28.911384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:28.911494       1 main.go:301] handling current node
	I1002 20:26:38.914130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:38.914165       1 main.go:301] handling current node
	I1002 20:26:48.907640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:48.907681       1 main.go:301] handling current node
	I1002 20:26:58.908673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:58.908721       1 main.go:301] handling current node
	I1002 20:27:08.914122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:27:08.914154       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba] <==
	I1002 20:20:43.745558       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:08.431186       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:08.431257       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:08.431339       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	E1002 20:21:08.433865       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	W1002 20:21:09.431415       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 20:21:09.431472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:09.431507       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431564       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 20:21:09.432661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:13.450452       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:13.450503       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:13.450794       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1002 20:21:13.499856       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 20:21:44.668705       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43290: use of closed network connection
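
Note: the repeated 503s above are the aggregation layer failing to reach metrics-server at 10.102.13.34:443 while its endpoints are still coming up; the errors stop once the APIService is re-added to the ResourceManager at 20:21:13. A quick after-the-fact check of the same APIService (a sketch, reusing the addons-693704 kubeconfig context the harness uses elsewhere in this report):

	# Availability condition the aggregator records for the APIService
	kubectl --context addons-693704 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")]}'
	# Backing endpoints; empty until the metrics-server pod is Ready
	kubectl --context addons-693704 -n kube-system get endpoints metrics-server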
	
	
	==> kube-controller-manager [ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c] <==
	I1002 20:19:36.927821       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:19:36.927907       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-693704"
	I1002 20:19:36.927948       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 20:19:36.927971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:19:36.929043       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:19:36.929089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:19:36.929104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:19:36.929196       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:19:36.929242       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:19:36.930939       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:19:36.953633       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:19:36.957922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 20:19:42.958900       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 20:20:06.887630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:06.887888       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 20:20:06.887954       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 20:20:06.966287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 20:20:06.978573       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 20:20:06.989795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:20:07.080038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:20:21.939957       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 20:20:36.994429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:37.091221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1002 20:21:07.000284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:21:07.098427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1] <==
	I1002 20:19:38.989384       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:19:39.087738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:19:39.188580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:19:39.188619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:19:39.188702       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:19:39.263259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:19:39.267990       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:19:39.278942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:19:39.279269       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:19:39.279289       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:19:39.289355       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:19:39.289374       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:19:39.289655       1 config.go:200] "Starting service config controller"
	I1002 20:19:39.289662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:19:39.289995       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:19:39.290002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:19:39.290636       1 config.go:309] "Starting node config controller"
	I1002 20:19:39.290645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:19:39.290651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:19:39.390091       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:19:39.390138       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:19:39.390179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
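
Note: the only warning in this block is kube-proxy reporting that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. The message itself names the remedy; in a kubeadm-provisioned cluster like this one the setting lives in the kube-proxy ConfigMap (a sketch; the ConfigMap name and field are kubeadm defaults, not taken from this run):

	# Show the current (unset) field the warning refers to
	kubectl --context addons-693704 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# Per the warning, setting nodePortAddresses to "primary" (or passing
	# --nodeport-addresses primary) limits NodePorts to the node's primary IPs.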
	
	
	==> kube-scheduler [020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251] <==
	E1002 20:19:30.082976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:19:30.083025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:19:30.083075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:30.083123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:19:30.083172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:30.083221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:19:30.083269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:19:30.083318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:19:30.083367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:19:30.083415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:19:30.083460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.083513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.083555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:19:30.083651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:19:30.083692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:19:30.083739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:30.086243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 20:19:30.905348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:19:30.932288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.964617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.984039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:31.017892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:31.036527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:31.063255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 20:19:31.603691       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:24:37 addons-693704 kubelet[1282]: E1002 20:24:37.747385    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
	Oct 02 20:24:38 addons-693704 kubelet[1282]: I1002 20:24:38.746794    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jblz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445393    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445458    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445651    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0a97c0d4-0277-4225-81aa-39349ced9b52): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445694    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:25:18 addons-693704 kubelet[1282]: E1002 20:25:18.747469    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:25:22 addons-693704 kubelet[1282]: I1002 20:25:22.747859    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-8rftt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:39 addons-693704 kubelet[1282]: I1002 20:25:39.747004    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kw45" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:41 addons-693704 kubelet[1282]: I1002 20:25:41.747057    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jblz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089045    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089147    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089357    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84_local-path-storage(bf5e0fa0-b505-42e6-98e4-bbed23229c11): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089403    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" podUID="bf5e0fa0-b505-42e6-98e4-bbed23229c11"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.597075    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" podUID="bf5e0fa0-b505-42e6-98e4-bbed23229c11"
	Oct 02 20:26:31 addons-693704 kubelet[1282]: E1002 20:26:31.546511    1282 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 20:26:31 addons-693704 kubelet[1282]: E1002 20:26:31.546606    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d16ac5e8-a382-4faa-85dc-039ac18fa4cf-gcr-creds podName:d16ac5e8-a382-4faa-85dc-039ac18fa4cf nodeName:}" failed. No retries permitted until 2025-10-02 20:28:33.546586929 +0000 UTC m=+540.956719904 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d16ac5e8-a382-4faa-85dc-039ac18fa4cf-gcr-creds") pod "registry-creds-764b6fb674-6cg6b" (UID: "d16ac5e8-a382-4faa-85dc-039ac18fa4cf") : secret "registry-creds-gcr" not found
	Oct 02 20:26:38 addons-693704 kubelet[1282]: I1002 20:26:38.746728    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-8rftt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:26:47 addons-693704 kubelet[1282]: I1002 20:26:47.747118    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kw45" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:26:52 addons-693704 kubelet[1282]: E1002 20:26:52.747795    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.641735    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.641811    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.642025    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0a97c0d4-0277-4225-81aa-39349ced9b52): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.642096    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:27:09 addons-693704 kubelet[1282]: I1002 20:27:09.746645    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jblz6" secret="" err="secret \"gcp-auth\" not found"
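
Note: every pull failure in this block has the same root cause: unauthenticated pulls of docker.io/nginx and docker.io/busybox tripping Docker Hub's rate limit (toomanyrequests), which then cascades into the ErrImagePull/ImagePullBackOff events and the PVC/CSI/LocalPath test failures seen in this run. The failing pull can be reproduced on the node itself (a sketch; crictl is the CRI client the harness already invokes later in this report):

	# Runs inside the addons-693704 node; returns the same toomanyrequests error while rate-limited
	minikube -p addons-693704 ssh -- sudo crictl pull docker.io/library/nginx:latest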
	
	
	==> storage-provisioner [165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa] <==
	W1002 20:26:50.610685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:52.613668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:52.618154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:54.621370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:54.625844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:56.629514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:56.636324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:58.639418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:58.644028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:00.647127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:00.651440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:02.654951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:02.659581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:04.662127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:04.666292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:06.676513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:06.682250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:08.685921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:08.692885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:10.696641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:10.700836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:12.704214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:12.710613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:14.715291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:14.721411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
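
Note: these warnings recur every two seconds, matching the storage-provisioner refreshing its leader-election lock, which this provisioner stores as a v1 Endpoints object; client-go prints the deprecation warning on each request, so this is noise rather than an error. To inspect the lock (the object name below is the provisioner's conventional one and is an assumption, not read from this log):

	# Endpoints-based leader-election lock; the leader annotation records the current holderIdentity
	kubectl --context addons-693704 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml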
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-693704 -n addons-693704
helpers_test.go:269: (dbg) Run:  kubectl --context addons-693704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-693704 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-693704 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84: exit status 1 (104.934082ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-693704/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:21:59 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-78xtg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-78xtg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m18s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-693704
	  Warning  Failed     4m15s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    119s (x2 over 4m15s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     119s (x2 over 4m15s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    107s (x3 over 5m18s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     11s (x3 over 4m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     11s (x2 over 2m14s)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t66j5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t66j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fndzf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v6xpn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6cg6b" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-693704 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable headlamp --alsologtostderr -v=1: exit status 11 (267.300852ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:27:17.232968 1004648 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:17.233822 1004648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:17.233863 1004648 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:17.233883 1004648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:17.234221 1004648 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:27:17.234556 1004648 mustload.go:65] Loading cluster: addons-693704
	I1002 20:27:17.234970 1004648 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:17.235015 1004648 addons.go:606] checking whether the cluster is paused
	I1002 20:27:17.235147 1004648 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:17.235180 1004648 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:27:17.235663 1004648 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:27:17.258115 1004648 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:17.258184 1004648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:27:17.277262 1004648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:27:17.372653 1004648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:27:17.372754 1004648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:27:17.404565 1004648 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:27:17.404588 1004648 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:27:17.404593 1004648 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:27:17.404597 1004648 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:27:17.404605 1004648 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:27:17.404610 1004648 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:27:17.404613 1004648 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:27:17.404616 1004648 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:27:17.404619 1004648 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:27:17.404625 1004648 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:27:17.404628 1004648 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:27:17.404632 1004648 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:27:17.404635 1004648 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:27:17.404638 1004648 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:27:17.404641 1004648 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:27:17.404646 1004648 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:27:17.404650 1004648 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:27:17.404655 1004648 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:27:17.404658 1004648 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:27:17.404661 1004648 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:27:17.404666 1004648 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:27:17.404673 1004648 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:27:17.404677 1004648 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:27:17.404680 1004648 cri.go:89] found id: ""
	I1002 20:27:17.404735 1004648 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:27:17.419534 1004648 out.go:203] 
	W1002 20:27:17.422353 1004648 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:27:17.422375 1004648 out.go:285] * 
	* 
	W1002 20:27:17.430049 1004648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:27:17.432958 1004648 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.00s)
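
Note: exit status 11 here has nothing to do with Headlamp itself. Before disabling an addon, minikube checks whether the cluster is paused by listing runc containers, and that check aborts because /run/runc is missing on this CRI-O node (the MK_ADDON_DISABLE_PAUSED error in the stderr above). The two halves of the check can be replayed in isolation; both commands appear verbatim in the log above:

	# CRI view of kube-system containers; succeeds, so the cluster itself is healthy
	minikube -p addons-693704 ssh -- sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The runc listing the pause check depends on; fails with "open /run/runc: no such file or directory"
	minikube -p addons-693704 ssh -- sudo runc list -f json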

                                                
                                    
TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-5wsmw" [f5576e70-cbb2-46b0-a8b7-56055e616959] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003521648s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (266.230185ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:27:14.230629 1004150 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:14.231389 1004150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:14.231411 1004150 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:14.231423 1004150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:14.231829 1004150 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:27:14.232166 1004150 mustload.go:65] Loading cluster: addons-693704
	I1002 20:27:14.232578 1004150 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:14.232603 1004150 addons.go:606] checking whether the cluster is paused
	I1002 20:27:14.232754 1004150 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:14.232790 1004150 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:27:14.233378 1004150 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:27:14.250722 1004150 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:14.250774 1004150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:27:14.267703 1004150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:27:14.373007 1004150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:27:14.373144 1004150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:27:14.402026 1004150 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:27:14.402133 1004150 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:27:14.402145 1004150 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:27:14.402149 1004150 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:27:14.402153 1004150 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:27:14.402157 1004150 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:27:14.402160 1004150 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:27:14.402163 1004150 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:27:14.402166 1004150 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:27:14.402173 1004150 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:27:14.402176 1004150 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:27:14.402180 1004150 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:27:14.402183 1004150 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:27:14.402187 1004150 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:27:14.402191 1004150 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:27:14.402196 1004150 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:27:14.402204 1004150 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:27:14.402208 1004150 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:27:14.402211 1004150 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:27:14.402214 1004150 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:27:14.402219 1004150 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:27:14.402222 1004150 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:27:14.402225 1004150 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:27:14.402227 1004150 cri.go:89] found id: ""
	I1002 20:27:14.402279 1004150 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:27:14.422115 1004150 out.go:203] 
	W1002 20:27:14.424840 1004150 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:27:14.424866 1004150 out.go:285] * 
	W1002 20:27:14.432722 1004150 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:27:14.435555 1004150 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)
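Root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then querying runc state; on this CRI-O node /run/runc does not exist, so `sudo runc list -f json` exits non-zero and the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch of that check, assuming the addons-693704 profile is still running (both node-side commands are lifted verbatim from the stderr):

	# container listing via crictl succeeds...
	minikube -p addons-693704 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ...but the follow-up runc query fails, since CRI-O keeps no state under /run/runc:
	minikube -p addons-693704 ssh -- sudo runc list -f json
	# time="..." level=error msg="open /run/runc: no such file or directory"

The Headlamp failure above ends with the same exit status 11, so the parallel addon-disable failures in this run share this paused-state check as their trigger.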

TestAddons/parallel/LocalPath (303.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-693704 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-693704 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-693704 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the identical helpers_test.go:402 poll of "kubectl --context addons-693704 get pvc test-pvc -o jsonpath={.status.phase} -n default" repeats 299 more times here, covering the full 5m0s wait; duplicate lines elided ...]
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
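The wait loop only polls the PVC phase, so the log never shows why binding failed to complete within 5m0s. A few follow-up queries that would narrow it down; the first command is the exact poll from the log, while the describe/provisioner commands are generic diagnostics not taken from this run, and `local-path-storage` is the upstream local-path-provisioner default namespace (an assumption here):

	# the exact poll that timed out, verbatim from the log:
	kubectl --context addons-693704 get pvc test-pvc -n default -o jsonpath={.status.phase}
	# generic diagnostics (assumed, not from this run):
	kubectl --context addons-693704 describe pvc test-pvc -n default
	kubectl --context addons-693704 get pods -n local-path-storage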
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-693704
helpers_test.go:243: (dbg) docker inspect addons-693704:

-- stdout --
	[
	    {
	        "Id": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	        "Created": "2025-10-02T20:19:07.144298893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995109,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:19:07.216699876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hostname",
	        "HostsPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/hosts",
	        "LogPath": "/var/lib/docker/containers/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277/d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277-json.log",
	        "Name": "/addons-693704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-693704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-693704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d39c48e9924533b90d84191a9fa8b90846d32b8c1aab3a4ab639f652d3f84277",
	                "LowerDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/580c941f6b9ccfd65a5db2e173f3e3860bd4279fdb3e505e1ce19cb45cd9d997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-693704",
	                "Source": "/var/lib/docker/volumes/addons-693704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-693704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-693704",
	                "name.minikube.sigs.k8s.io": "addons-693704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab8175306a77dcd2868d77b0652aff78896362c7258aefc47fe7a07059e18c86",
	            "SandboxKey": "/var/run/docker/netns/ab8175306a77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-693704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:98:f0:2f:5f:be",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b2b7a73ec267c22f9c2a0b05d90a02bfb26f74cfccf22ef9af628da6d1b040f0",
	                    "EndpointID": "a29bf68bc8126d88282105e99c5ad7822f95d3abd8c683fc3272ac8e0ad9c3f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-693704",
	                        "d39c48e99245"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
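The inspect output confirms the SSH port mapping the addon commands relied on: 22/tcp is published on 127.0.0.1:33900, matching the `new ssh client` line in the CloudSpanner stderr earlier. The same value can be pulled with the template minikube itself runs (format string verbatim from the log):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-693704
	# 33900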
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-693704 -n addons-693704
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-693704 logs -n 25: (1.468108115s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-926391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ -o=json --download-only -p download-only-569491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-926391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-926391   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-569491                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-569491   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p download-docker-496636 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p download-docker-496636                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-496636 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p binary-mirror-261948 --alsologtostderr --binary-mirror http://127.0.0.1:38235 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p binary-mirror-261948                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-261948   │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ addons  │ disable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-693704                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p addons-693704 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ addons  │ addons-693704 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │                     │
	│ ip      │ addons-693704 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:21 UTC │ 02 Oct 25 20:21 UTC │
	│ addons  │ addons-693704 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-693704          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:42.587429  994709 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:42.587660  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.587694  994709 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:42.587713  994709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:42.588005  994709 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:18:42.588496  994709 out.go:368] Setting JSON to false
	I1002 20:18:42.589377  994709 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18060,"bootTime":1759418263,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:18:42.589480  994709 start.go:140] virtualization:  
	I1002 20:18:42.592863  994709 out.go:179] * [addons-693704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:18:42.596651  994709 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:42.596802  994709 notify.go:221] Checking for updates...
	I1002 20:18:42.602490  994709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:42.605403  994709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:18:42.608387  994709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:18:42.611210  994709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:18:42.614017  994709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:42.617196  994709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:42.641430  994709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:18:42.641548  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.702297  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.693145863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.702404  994709 docker.go:319] overlay module found
	I1002 20:18:42.705389  994709 out.go:179] * Using the docker driver based on user configuration
	I1002 20:18:42.708231  994709 start.go:306] selected driver: docker
	I1002 20:18:42.708247  994709 start.go:936] validating driver "docker" against <nil>
	I1002 20:18:42.708259  994709 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:42.708953  994709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:42.762696  994709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 20:18:42.753788413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:42.762850  994709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:42.763087  994709 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:42.766074  994709 out.go:179] * Using Docker driver with root privileges
	I1002 20:18:42.768763  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:18:42.768836  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:42.768849  994709 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:42.768919  994709 start.go:350] cluster config:
	{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:42.771909  994709 out.go:179] * Starting "addons-693704" primary control-plane node in "addons-693704" cluster
	I1002 20:18:42.774712  994709 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:42.777590  994709 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:42.780428  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:42.780455  994709 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:42.780491  994709 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:42.780500  994709 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:42.780575  994709 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 20:18:42.780584  994709 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:42.780914  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:18:42.780943  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json: {Name:mkd60ee77440eccb122eacb378637e77c2fde5a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:42.795665  994709 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:42.795798  994709 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:18:42.795824  994709 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:18:42.795836  994709 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:18:42.795846  994709 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:18:42.795852  994709 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:19:00.985065  994709 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:19:00.985108  994709 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:19:00.985137  994709 start.go:361] acquireMachinesLock for addons-693704: {Name:mkeb9eb5752430ab2d33310b44640ce93b8d2df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:19:00.985263  994709 start.go:365] duration metric: took 102.298µs to acquireMachinesLock for "addons-693704"
	I1002 20:19:00.985295  994709 start.go:94] Provisioning new machine with config: &{Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:00.985372  994709 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:19:00.988832  994709 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:19:00.989104  994709 start.go:160] libmachine.API.Create for "addons-693704" (driver="docker")
	I1002 20:19:00.989159  994709 client.go:168] LocalClient.Create starting
	I1002 20:19:00.989296  994709 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 20:19:01.433837  994709 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 20:19:01.564238  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:19:01.580044  994709 cli_runner.go:211] docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:19:01.580136  994709 network_create.go:284] running [docker network inspect addons-693704] to gather additional debugging logs...
	I1002 20:19:01.580158  994709 cli_runner.go:164] Run: docker network inspect addons-693704
	W1002 20:19:01.596534  994709 cli_runner.go:211] docker network inspect addons-693704 returned with exit code 1
	I1002 20:19:01.596569  994709 network_create.go:287] error running [docker network inspect addons-693704]: docker network inspect addons-693704: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-693704 not found
	I1002 20:19:01.596590  994709 network_create.go:289] output of [docker network inspect addons-693704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-693704 not found
	
	** /stderr **
	I1002 20:19:01.596688  994709 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:01.612608  994709 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f17c0}
	I1002 20:19:01.612647  994709 network_create.go:124] attempt to create docker network addons-693704 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:19:01.612711  994709 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-693704 addons-693704
	I1002 20:19:01.677264  994709 network_create.go:108] docker network addons-693704 192.168.49.0/24 created
	I1002 20:19:01.677303  994709 kic.go:121] calculated static IP "192.168.49.2" for the "addons-693704" container
	I1002 20:19:01.677378  994709 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:19:01.693107  994709 cli_runner.go:164] Run: docker volume create addons-693704 --label name.minikube.sigs.k8s.io=addons-693704 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:19:01.711600  994709 oci.go:103] Successfully created a docker volume addons-693704
	I1002 20:19:01.711704  994709 cli_runner.go:164] Run: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:19:02.731832  994709 cli_runner.go:217] Completed: docker run --rm --name addons-693704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --entrypoint /usr/bin/test -v addons-693704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (1.020058685s)
	I1002 20:19:02.731865  994709 oci.go:107] Successfully prepared a docker volume addons-693704
	I1002 20:19:02.731897  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:02.731915  994709 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:19:02.731979  994709 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:19:07.072259  994709 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-693704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.340238594s)
	I1002 20:19:07.072312  994709 kic.go:203] duration metric: took 4.340372991s to extract preloaded images to volume ...
	W1002 20:19:07.072445  994709 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 20:19:07.072554  994709 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:19:07.131614  994709 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-693704 --name addons-693704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-693704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-693704 --network addons-693704 --ip 192.168.49.2 --volume addons-693704:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:19:07.425756  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Running}}
	I1002 20:19:07.450427  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.471353  994709 cli_runner.go:164] Run: docker exec addons-693704 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:19:07.519322  994709 oci.go:144] the created container "addons-693704" has a running status.
	I1002 20:19:07.519348  994709 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa...
	I1002 20:19:07.874970  994709 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:19:07.902253  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:07.924631  994709 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:19:07.924649  994709 kic_runner.go:114] Args: [docker exec --privileged addons-693704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:19:07.982879  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:08.009002  994709 machine.go:93] provisionDockerMachine start ...
	I1002 20:19:08.009096  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:08.026925  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:08.027256  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:08.027273  994709 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:19:08.027902  994709 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 20:19:11.161848  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.161874  994709 ubuntu.go:182] provisioning hostname "addons-693704"
	I1002 20:19:11.161998  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.180011  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.180318  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.180334  994709 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-693704 && echo "addons-693704" | sudo tee /etc/hostname
	I1002 20:19:11.318599  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693704
	
	I1002 20:19:11.318673  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.334766  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.335074  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.335095  994709 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-693704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-693704/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-693704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:19:11.466309  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:19:11.466378  994709 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 20:19:11.466405  994709 ubuntu.go:190] setting up certificates
	I1002 20:19:11.466416  994709 provision.go:84] configureAuth start
	I1002 20:19:11.466491  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:11.484411  994709 provision.go:143] copyHostCerts
	I1002 20:19:11.484497  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 20:19:11.484648  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 20:19:11.484708  994709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 20:19:11.484757  994709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.addons-693704 san=[127.0.0.1 192.168.49.2 addons-693704 localhost minikube]
	I1002 20:19:11.600457  994709 provision.go:177] copyRemoteCerts
	I1002 20:19:11.600526  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:19:11.600571  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.617715  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:11.713831  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:19:11.731711  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:19:11.748544  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:19:11.765398  994709 provision.go:87] duration metric: took 298.94846ms to configureAuth
	I1002 20:19:11.765428  994709 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:19:11.765610  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:11.765720  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:11.782571  994709 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.782895  994709 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I1002 20:19:11.782917  994709 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:19:12.024388  994709 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:19:12.024409  994709 machine.go:96] duration metric: took 4.015387209s to provisionDockerMachine
	I1002 20:19:12.024420  994709 client.go:171] duration metric: took 11.035249443s to LocalClient.Create
	I1002 20:19:12.024430  994709 start.go:168] duration metric: took 11.035328481s to libmachine.API.Create "addons-693704"
	I1002 20:19:12.024438  994709 start.go:294] postStartSetup for "addons-693704" (driver="docker")
	I1002 20:19:12.024448  994709 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:19:12.024531  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:19:12.024581  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.046435  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.145575  994709 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:19:12.148535  994709 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:19:12.148564  994709 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:19:12.148574  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 20:19:12.148638  994709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 20:19:12.148666  994709 start.go:297] duration metric: took 124.222688ms for postStartSetup
	I1002 20:19:12.148981  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.164538  994709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/config.json ...
	I1002 20:19:12.164807  994709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:19:12.164866  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.181186  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.274914  994709 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:19:12.279510  994709 start.go:129] duration metric: took 11.294122752s to createHost
	I1002 20:19:12.279576  994709 start.go:84] releasing machines lock for "addons-693704", held for 11.294297786s
	I1002 20:19:12.279683  994709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-693704
	I1002 20:19:12.298232  994709 ssh_runner.go:195] Run: cat /version.json
	I1002 20:19:12.298284  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.298302  994709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:19:12.298368  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:12.327555  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.332727  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:12.506484  994709 ssh_runner.go:195] Run: systemctl --version
	I1002 20:19:12.512752  994709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:19:12.553418  994709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:19:12.557546  994709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:19:12.557619  994709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:19:12.586608  994709 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 20:19:12.586633  994709 start.go:496] detecting cgroup driver to use...
	I1002 20:19:12.586667  994709 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:19:12.586718  994709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:19:12.605523  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:19:12.618955  994709 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:19:12.619019  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:19:12.636190  994709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:19:12.655245  994709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:19:12.773294  994709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:19:12.899674  994709 docker.go:234] disabling docker service ...
	I1002 20:19:12.899796  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:19:12.921306  994709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:19:12.935583  994709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:19:13.058429  994709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:19:13.191274  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:19:13.203980  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:19:13.218083  994709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:19:13.218172  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.227208  994709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:19:13.227310  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.236115  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.244683  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.253282  994709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:19:13.260942  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.269710  994709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.282906  994709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.291613  994709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:19:13.298701  994709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:19:13.306154  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.416108  994709 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:19:13.549800  994709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:19:13.549963  994709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:19:13.553947  994709 start.go:564] Will wait 60s for crictl version
	I1002 20:19:13.554015  994709 ssh_runner.go:195] Run: which crictl
	I1002 20:19:13.557729  994709 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:19:13.584434  994709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:19:13.584598  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.611885  994709 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.643761  994709 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:19:13.646706  994709 cli_runner.go:164] Run: docker network inspect addons-693704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:19:13.662159  994709 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:19:13.665953  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:13.675384  994709 kubeadm.go:883] updating cluster {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:19:13.675498  994709 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:13.675559  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.707568  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.707592  994709 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:19:13.707650  994709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:13.733091  994709 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:13.733117  994709 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:19:13.733126  994709 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:19:13.733260  994709 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-693704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
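The rendered unit above is written as a systemd drop-in (10-kubeadm.conf, scp'd a few lines below), which is why ExecStart appears twice: the empty ExecStart= clears the base unit's command before the drop-in sets minikube's own. Inspecting the merged result on the node is standard systemctl usage (not taken from this log):

	# show the base kubelet.service plus every drop-in that overrides it
	systemctl cat kubelet
	# after editing a drop-in, reload unit definitions and restart
	sudo systemctl daemon-reload && sudo systemctl restart kubelet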
	I1002 20:19:13.733342  994709 ssh_runner.go:195] Run: crio config
	I1002 20:19:13.792130  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:13.792153  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:13.792194  994709 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:19:13.792227  994709 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-693704 NodeName:addons-693704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:19:13.792401  994709 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-693704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
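The generated manifest above stitches four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one file separated by --- markers. Recent kubeadm releases can sanity-check such a file before init; a hedged example, assuming the file path the log writes it to:

	# validates API versions and field names without touching the cluster
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml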
	I1002 20:19:13.792492  994709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:19:13.800668  994709 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:19:13.800767  994709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:19:13.808293  994709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 20:19:13.821242  994709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:19:13.834169  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1002 20:19:13.846928  994709 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:19:13.850566  994709 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:13.860224  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.968588  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:19:13.985352  994709 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704 for IP: 192.168.49.2
	I1002 20:19:13.985422  994709 certs.go:195] generating shared ca certs ...
	I1002 20:19:13.985470  994709 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:13.985658  994709 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 20:19:15.330293  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt ...
	I1002 20:19:15.330325  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt: {Name:mk4cd3e6dd08eb98d92774a50706472e7144a029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330529  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key ...
	I1002 20:19:15.330543  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key: {Name:mk973528442a241534dab3b3f10010ef617c41eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.330647  994709 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 20:19:15.997150  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt ...
	I1002 20:19:15.997181  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt: {Name:mk99f3de897f678c1a5844576ab27113951f2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997373  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key ...
	I1002 20:19:15.997386  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key: {Name:mka357a75cbeebaba7cc94478a077ee2190bafb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:15.997484  994709 certs.go:257] generating profile certs ...
	I1002 20:19:15.997541  994709 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key
	I1002 20:19:15.997561  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt with IP's: []
	I1002 20:19:16.185268  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt ...
	I1002 20:19:16.185298  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: {Name:mk19c4790d2aed31a89cf09dcf81ae3f076c409b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185485  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key ...
	I1002 20:19:16.185498  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.key: {Name:mk1b58c21fd0fb98ae80d1aeead9a8a2c7b84f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.185581  994709 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d
	I1002 20:19:16.185600  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:19:16.909759  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d ...
	I1002 20:19:16.909792  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d: {Name:mkcdcc8a35d2bead0bc666b364b50007c53b8ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.910784  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d ...
	I1002 20:19:16.910803  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d: {Name:mk54e705787535bd0f02f9a6cb06ac271457b26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:16.911454  994709 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt
	I1002 20:19:16.911552  994709 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key.4674427d -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key
	I1002 20:19:16.911609  994709 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key
	I1002 20:19:16.911632  994709 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt with IP's: []
	I1002 20:19:17.189632  994709 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt ...
	I1002 20:19:17.189663  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt: {Name:mkc2967e5b8de8de5ffc244b2174ce7d1307c7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.189855  994709 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key ...
	I1002 20:19:17.189870  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key: {Name:mk3a5d9aa39ed72b68b1236fc674f044b595f3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.190670  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:19:17.190720  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:19:17.190746  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:19:17.190775  994709 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 20:19:17.191345  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:19:17.209222  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:19:17.228051  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:19:17.245976  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:19:17.263876  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:19:17.281588  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:19:17.300066  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:19:17.317623  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:19:17.335889  994709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:19:17.355499  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:19:17.368597  994709 ssh_runner.go:195] Run: openssl version
	I1002 20:19:17.375290  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:19:17.383559  994709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387356  994709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.387462  994709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:17.428204  994709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
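The b5213941.0 symlink name is not arbitrary: OpenSSL locates CA certificates by the subject-name hash printed by the x509 -hash invocation two lines earlier, suffixed with a collision counter. A short sketch of how that name is derived, using the same cert path as the log:

	# compute the subject hash OpenSSL uses for CA lookup, then link the cert under it
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"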
	I1002 20:19:17.436613  994709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:19:17.440314  994709 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:19:17.440367  994709 kubeadm.go:400] StartCluster: {Name:addons-693704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-693704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:19:17.440454  994709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:19:17.440516  994709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:19:17.467595  994709 cri.go:89] found id: ""
	I1002 20:19:17.467677  994709 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:19:17.475494  994709 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:19:17.483312  994709 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:19:17.483390  994709 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:19:17.491411  994709 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:19:17.491431  994709 kubeadm.go:157] found existing configuration files:
	
	I1002 20:19:17.491483  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:19:17.499089  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:19:17.499169  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:19:17.506794  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:19:17.514714  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:19:17.514785  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:19:17.522181  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.530993  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:19:17.531060  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:19:17.538976  994709 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:19:17.546795  994709 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:19:17.546892  994709 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:19:17.554492  994709 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:19:17.596193  994709 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:19:17.596303  994709 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:19:17.627320  994709 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:19:17.627397  994709 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 20:19:17.627440  994709 kubeadm.go:318] OS: Linux
	I1002 20:19:17.627493  994709 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:19:17.627548  994709 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 20:19:17.627604  994709 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:19:17.627659  994709 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:19:17.627714  994709 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:19:17.627769  994709 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:19:17.627820  994709 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:19:17.627872  994709 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:19:17.627924  994709 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 20:19:17.698891  994709 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:19:17.699015  994709 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:19:17.699132  994709 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:19:17.708645  994709 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:19:17.711822  994709 out.go:252]   - Generating certificates and keys ...
	I1002 20:19:17.711957  994709 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:19:17.712048  994709 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:19:17.858214  994709 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:19:19.472133  994709 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:19:19.853869  994709 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:19:20.278527  994709 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:19:21.038810  994709 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:19:21.039005  994709 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:21.583298  994709 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:19:21.583465  994709 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-693704 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:19:22.178821  994709 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:19:22.869729  994709 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:19:23.067072  994709 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:19:23.067180  994709 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:19:23.190079  994709 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:19:23.633624  994709 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:19:23.861907  994709 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:19:24.252326  994709 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:19:24.757359  994709 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:19:24.758089  994709 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:19:24.760711  994709 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:19:24.764198  994709 out.go:252]   - Booting up control plane ...
	I1002 20:19:24.764310  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:19:24.764403  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:19:24.764489  994709 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:19:24.780867  994709 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:19:24.781188  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:19:24.788581  994709 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:19:24.789049  994709 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:19:24.789397  994709 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:19:24.926323  994709 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:19:24.926459  994709 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:19:26.427259  994709 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501639322s
	I1002 20:19:26.430848  994709 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:19:26.430969  994709 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:19:26.431069  994709 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:19:26.431155  994709 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:19:28.445585  994709 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.013932999s
	I1002 20:19:30.026061  994709 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.595131543s
	I1002 20:19:31.934100  994709 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501085496s
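The three control-plane checks poll the HTTPS endpoints kubeadm names above, and the same probes can be run by hand from inside the node (e.g. via minikube ssh) when a boot hangs. A sketch using the addresses from this log, skipping certificate verification for brevity:

	curl -ks https://192.168.49.2:8443/livez      # kube-apiserver
	curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -ks https://127.0.0.1:10259/livez        # kube-scheduler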
	I1002 20:19:31.955369  994709 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:19:31.978849  994709 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:19:32.006745  994709 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:19:32.007240  994709 kubeadm.go:318] [mark-control-plane] Marking the node addons-693704 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:19:32.024906  994709 kubeadm.go:318] [bootstrap-token] Using token: 1gg1hv.lld6lawd4ni62mxk
	I1002 20:19:32.028031  994709 out.go:252]   - Configuring RBAC rules ...
	I1002 20:19:32.028186  994709 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:19:32.038937  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:19:32.049818  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:19:32.054935  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:19:32.062162  994709 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:19:32.070713  994709 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:19:32.338182  994709 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:19:32.784741  994709 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:19:33.338747  994709 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:19:33.340165  994709 kubeadm.go:318] 
	I1002 20:19:33.340273  994709 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:19:33.340285  994709 kubeadm.go:318] 
	I1002 20:19:33.340381  994709 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:19:33.340391  994709 kubeadm.go:318] 
	I1002 20:19:33.340426  994709 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:19:33.340507  994709 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:19:33.340581  994709 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:19:33.340595  994709 kubeadm.go:318] 
	I1002 20:19:33.340666  994709 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:19:33.340674  994709 kubeadm.go:318] 
	I1002 20:19:33.340728  994709 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:19:33.340734  994709 kubeadm.go:318] 
	I1002 20:19:33.340801  994709 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:19:33.340885  994709 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:19:33.340967  994709 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:19:33.340973  994709 kubeadm.go:318] 
	I1002 20:19:33.341069  994709 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:19:33.341173  994709 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:19:33.341179  994709 kubeadm.go:318] 
	I1002 20:19:33.341310  994709 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341442  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 20:19:33.341466  994709 kubeadm.go:318] 	--control-plane 
	I1002 20:19:33.341470  994709 kubeadm.go:318] 
	I1002 20:19:33.341572  994709 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:19:33.341578  994709 kubeadm.go:318] 
	I1002 20:19:33.341672  994709 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1gg1hv.lld6lawd4ni62mxk \
	I1002 20:19:33.341797  994709 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 20:19:33.345719  994709 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 20:19:33.345963  994709 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 20:19:33.346097  994709 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
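With init complete, the admin kubeconfig at /etc/kubernetes/admin.conf is enough to confirm the control plane answers. A minimal check from inside the node, reusing the kubectl path seen in the log's other invocations:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes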
	I1002 20:19:33.346131  994709 cni.go:84] Creating CNI manager for ""
	I1002 20:19:33.346146  994709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:19:33.349554  994709 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:19:33.352542  994709 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:19:33.358001  994709 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:19:33.358065  994709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:19:33.375272  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
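Once the CNI manifest is applied, the kindnet pods should come up in kube-system. Assuming kindnet is deployed as a DaemonSet of that name (its usual form; not confirmed by this log), a quick rollout check looks like:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset kindnet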
	I1002 20:19:33.656465  994709 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:19:33.656564  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:33.656619  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-693704 minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=addons-693704 minikube.k8s.io/primary=true
	I1002 20:19:33.838722  994709 ops.go:34] apiserver oom_adj: -16
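The -16 read back from /proc/<pid>/oom_adj biases the kernel OOM killer away from the API server: the legacy knob spans -16..15, with -17 disabling the killer for that process. A hypothetical inspection of both the legacy and modern interfaces:

	pid=$(pgrep -f kube-apiserver)     # hypothetical lookup by command line
	cat "/proc/${pid}/oom_adj"         # legacy interface, as read in the log
	cat "/proc/${pid}/oom_score_adj"   # modern equivalent, range -1000..1000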
	I1002 20:19:33.838894  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.339235  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.839327  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.339115  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.839347  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.339936  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.838951  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.339896  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.839301  994709 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:37.981403  994709 kubeadm.go:1113] duration metric: took 4.324906426s to wait for elevateKubeSystemPrivileges
	I1002 20:19:37.981430  994709 kubeadm.go:402] duration metric: took 20.541068078s to StartCluster
	I1002 20:19:37.981448  994709 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982146  994709 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:19:37.982540  994709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:37.982732  994709 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:37.982850  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:19:37.983086  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:37.983116  994709 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
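The toEnable map above is the profile's resolved addon set; the same information is available from the CLI, and individual addons can be toggled per profile (standard minikube commands, shown with this run's test binary and profile name):

	out/minikube-linux-arm64 -p addons-693704 addons list
	out/minikube-linux-arm64 -p addons-693704 addons enable metrics-server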
	I1002 20:19:37.983227  994709 addons.go:69] Setting yakd=true in profile "addons-693704"
	I1002 20:19:37.983240  994709 addons.go:238] Setting addon yakd=true in "addons-693704"
	I1002 20:19:37.983262  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.983805  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.983948  994709 addons.go:69] Setting inspektor-gadget=true in profile "addons-693704"
	I1002 20:19:37.983963  994709 addons.go:238] Setting addon inspektor-gadget=true in "addons-693704"
	I1002 20:19:37.983984  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.984372  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.984784  994709 addons.go:69] Setting metrics-server=true in profile "addons-693704"
	I1002 20:19:37.984803  994709 addons.go:238] Setting addon metrics-server=true in "addons-693704"
	I1002 20:19:37.984846  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.985255  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986812  994709 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.987111  994709 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-693704"
	I1002 20:19:37.987164  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.986986  994709 addons.go:69] Setting cloud-spanner=true in profile "addons-693704"
	I1002 20:19:37.988662  994709 addons.go:238] Setting addon cloud-spanner=true in "addons-693704"
	I1002 20:19:37.988715  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.989206  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986995  994709 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-693704"
	I1002 20:19:37.992261  994709 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:37.992347  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993008  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.993440  994709 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-693704"
	I1002 20:19:37.993470  994709 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-693704"
	I1002 20:19:37.993496  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:37.993939  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.986999  994709 addons.go:69] Setting default-storageclass=true in profile "addons-693704"
	I1002 20:19:37.999991  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-693704"
	I1002 20:19:37.987003  994709 addons.go:69] Setting gcp-auth=true in profile "addons-693704"
	I1002 20:19:38.001780  994709 mustload.go:65] Loading cluster: addons-693704
	I1002 20:19:38.002068  994709 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:38.002442  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.004847  994709 addons.go:69] Setting registry=true in profile "addons-693704"
	I1002 20:19:38.004895  994709 addons.go:238] Setting addon registry=true in "addons-693704"
	I1002 20:19:38.004938  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.006258  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987015  994709 addons.go:69] Setting ingress=true in profile "addons-693704"
	I1002 20:19:38.027270  994709 addons.go:238] Setting addon ingress=true in "addons-693704"
	I1002 20:19:38.027361  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.027894  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:37.987020  994709 addons.go:69] Setting ingress-dns=true in profile "addons-693704"
	I1002 20:19:38.058307  994709 addons.go:238] Setting addon ingress-dns=true in "addons-693704"
	I1002 20:19:38.058379  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.058921  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.096850  994709 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:19:38.105676  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:19:38.105709  994709 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:19:38.105842  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.008072  994709 out.go:179] * Verifying Kubernetes components...
	I1002 20:19:38.008152  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026483  994709 addons.go:69] Setting registry-creds=true in profile "addons-693704"
	I1002 20:19:38.116211  994709 addons.go:238] Setting addon registry-creds=true in "addons-693704"
	I1002 20:19:38.116261  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.116877  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.148060  994709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:38.026500  994709 addons.go:69] Setting storage-provisioner=true in profile "addons-693704"
	I1002 20:19:38.148217  994709 addons.go:238] Setting addon storage-provisioner=true in "addons-693704"
	I1002 20:19:38.148254  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.148800  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026507  994709 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-693704"
	I1002 20:19:38.181689  994709 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-693704"
	I1002 20:19:38.026527  994709 addons.go:69] Setting volcano=true in profile "addons-693704"
	I1002 20:19:38.185000  994709 addons.go:238] Setting addon volcano=true in "addons-693704"
	I1002 20:19:38.185048  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.200337  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.026533  994709 addons.go:69] Setting volumesnapshots=true in profile "addons-693704"
	I1002 20:19:38.221856  994709 addons.go:238] Setting addon volumesnapshots=true in "addons-693704"
	I1002 20:19:38.221908  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.222576  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.234975  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.241128  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:19:38.241462  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.027224  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.264137  994709 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:19:38.269034  994709 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:38.269076  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:19:38.269173  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.294256  994709 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:19:38.298092  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:19:38.298232  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:19:38.298258  994709 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:19:38.298339  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.305328  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:19:38.326652  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.333498  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:38.339026  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:19:38.339916  994709 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:19:38.340074  994709 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:19:38.348717  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:19:38.349240  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:38.349263  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:19:38.349335  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.370496  994709 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:19:38.370522  994709 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:19:38.370590  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393413  994709 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:38.393443  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:19:38.393518  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.393705  994709 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:19:38.401523  994709 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:38.401566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:19:38.401656  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.415528  994709 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:19:38.419444  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:19:38.424637  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:19:38.430455  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:19:38.433425  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:19:38.434098  994709 out.go:179]   - Using image docker.io/registry:3.0.0
	W1002 20:19:38.437996  994709 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 20:19:38.442715  994709 addons.go:238] Setting addon default-storageclass=true in "addons-693704"
	I1002 20:19:38.442755  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.443165  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.443728  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.447652  994709 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:19:38.447679  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:19:38.447744  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.463660  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.464460  994709 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:19:38.466815  994709 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:19:38.467693  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:38.467719  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:19:38.467819  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.470864  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:19:38.470890  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:19:38.470960  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.500926  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.502016  994709 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:19:38.503153  994709 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:19:38.510195  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:19:38.510222  994709 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:19:38.510304  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.511213  994709 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 20:19:38.512545  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.514344  994709 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-693704"
	I1002 20:19:38.514385  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:38.514794  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:38.538485  994709 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:38.538505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:19:38.538577  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.563237  994709 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:38.563266  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:19:38.563330  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.573905  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.605278  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.621692  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.637902  994709 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:38.637933  994709 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:19:38.638002  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.655698  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.682118  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.689646  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.707346  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.731079  994709 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:19:38.738329  994709 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:19:38.738517  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.739582  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.741646  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.741686  994709 retry.go:31] will retry after 354.664397ms: ssh: handshake failed: EOF
	I1002 20:19:38.741822  994709 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:38.741834  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:19:38.741914  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:38.754174  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:38.790638  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	W1002 20:19:38.791850  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.791874  994709 retry.go:31] will retry after 168.291026ms: ssh: handshake failed: EOF
	I1002 20:19:38.891518  994709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 20:19:38.961324  994709 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 20:19:38.961355  994709 retry.go:31] will retry after 311.734351ms: ssh: handshake failed: EOF
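The `ssh: handshake failed: EOF` warnings here are common while many parallel addon installers open sessions against the same sshd at once; retry.go backs each dial off for a short, jittered interval and tries again. A hypothetical shell rendering of that pattern:

	for attempt in 1 2 3 4 5; do
	  ssh -o ConnectTimeout=5 -p "$port" docker@127.0.0.1 true && break
	  sleep "0.$((RANDOM % 800 + 100))"   # 100-899 ms, roughly the delays logged above
	done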
	I1002 20:19:39.180793  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:19:39.180831  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:19:39.246769  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:19:39.246793  994709 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:19:39.317148  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:19:39.317174  994709 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:19:39.327274  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:39.369305  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:39.371258  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:39.386300  994709 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:19:39.386327  994709 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:19:39.412476  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:19:39.412502  994709 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:19:39.447295  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:39.454691  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:19:39.454712  994709 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:19:39.483532  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:39.489546  994709 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.489572  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:19:39.600950  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:19:39.600977  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:19:39.608088  994709 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.608113  994709 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:19:39.625123  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:19:39.625149  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:19:39.646231  994709 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.646256  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:19:39.666494  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:39.667190  994709 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.667209  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:19:39.670888  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:39.686238  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:39.763670  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:39.778706  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:19:39.778734  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:19:39.800126  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:39.803147  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.824074  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:19:39.824103  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:19:39.826926  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.887787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:39.970247  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:19:39.970276  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:19:39.982837  994709 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:19:39.982863  994709 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:19:40.095977  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:19:40.096005  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:19:40.202267  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:19:40.202301  994709 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:19:40.252464  994709 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:19:40.252492  994709 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:19:40.425953  994709 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.425979  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:19:40.440769  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:19:40.440793  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.759801869s)
	I1002 20:19:40.651360  994709 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.140115117s)
	I1002 20:19:40.651466  994709 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
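The 2.1s command completed above is how that host record gets into CoreDNS: fetch the coredns ConfigMap as YAML, use sed to splice a `hosts` block (mapping 192.168.49.1 to host.minikube.internal) in front of the `forward . /etc/resolv.conf` directive, and push the result back with `kubectl replace`. Stripped of the minikube-specific binary paths, the pipeline is essentially:

	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -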
	I1002 20:19:40.652113  994709 node_ready.go:35] waiting up to 6m0s for node "addons-693704" to be "Ready" ...
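From here node_ready.go polls the node object until its Ready condition turns True; each `will retry` warning below is one poll that still saw `Ready: False`. Outside the harness, the equivalent wait would be something like:

	kubectl wait --for=condition=Ready node/addons-693704 --timeout=6m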
	I1002 20:19:40.708925  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.740283  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:19:40.740311  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:19:41.000182  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:19:41.000218  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:19:41.157742  994709 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-693704" context rescaled to 1 replicas
	I1002 20:19:41.160542  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:19:41.160566  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:19:41.368904  994709 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:19:41.368930  994709 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:19:41.434210  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.106899571s)
	I1002 20:19:41.434277  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.064948233s)
	I1002 20:19:41.546392  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 20:19:42.681278  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:44.305558  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.934264933s)
	I1002 20:19:44.305591  994709 addons.go:479] Verifying addon ingress=true in "addons-693704"
	I1002 20:19:44.305742  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.858421462s)
	I1002 20:19:44.305803  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.822248913s)
	I1002 20:19:44.306140  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.639611107s)
	W1002 20:19:44.306168  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:44.306190  994709 retry.go:31] will retry after 271.617135ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
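Note that every ig-crd.yaml retry in this run fails identically: kubectl's client-side validation rejects the file because at least one YAML document in it sets neither apiVersion nor kind, and since the file on disk never changes between attempts, the backoff-and-retry loop cannot succeed. The workaround the error message itself names, shown only for completeness, skips validation rather than fixing the manifest:

	kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml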
	I1002 20:19:44.306249  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.635338018s)
	I1002 20:19:44.306301  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.62003767s)
	I1002 20:19:44.306341  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.542651372s)
	I1002 20:19:44.306505  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.506354272s)
	I1002 20:19:44.306533  994709 addons.go:479] Verifying addon registry=true in "addons-693704"
	I1002 20:19:44.306707  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.503533192s)
	I1002 20:19:44.306720  994709 addons.go:479] Verifying addon metrics-server=true in "addons-693704"
	I1002 20:19:44.306759  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.479800741s)
	I1002 20:19:44.307143  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.41932494s)
	I1002 20:19:44.307220  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.598267016s)
	W1002 20:19:44.307774  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:19:44.307787  994709 retry.go:31] will retry after 292.505551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
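This one, by contrast, is a CRD-establishment race rather than a bad manifest: the same apply both creates the snapshot.storage.k8s.io CRDs and instantiates a VolumeSnapshotClass, and the API server has not registered the new kind by the time the custom resource arrives, hence "no matches for kind". A sequenced sketch that waits for the CRD before creating the class (same file paths as above):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml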
	I1002 20:19:44.308765  994709 out.go:179] * Verifying ingress addon...
	I1002 20:19:44.312945  994709 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-693704 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:19:44.313054  994709 out.go:179] * Verifying registry addon...
	I1002 20:19:44.315485  994709 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:19:44.317462  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:19:44.330428  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:19:44.330450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.330653  994709 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:19:44.330663  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
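The kapi.go:96 lines that fill the rest of this section are a poll loop: list the pods matching a label selector, log their current phase, and repeat until they are Running or the timeout expires. A rough kubectl equivalent for the two selectors above (the harness polls the API directly rather than shelling out):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m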
	W1002 20:19:44.357589  994709 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
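The default-storageclass error above is ordinary optimistic concurrency: the update carried a stale resourceVersion because the local-path StorageClass was modified between read and write. Re-reading and retrying resolves it, as does a patch, which does not send a resourceVersion; for example (a hypothetical one-off against the same object named in the message):

	kubectl patch storageclass local-path --type merge -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'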
	I1002 20:19:44.577967  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:44.601481  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:44.645691  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.099252349s)
	I1002 20:19:44.645728  994709 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-693704"
	I1002 20:19:44.650504  994709 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:19:44.655039  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:19:44.667816  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:19:44.667846  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:44.821715  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.822383  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 20:19:45.161026  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:45.165268  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.325696  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.325851  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.657820  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.818501  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.820022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.829170  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.25115874s)
	W1002 20:19:45.829204  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:45.829226  994709 retry.go:31] will retry after 265.136863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:45.829298  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.227785836s)
	I1002 20:19:45.919439  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:19:45.919542  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:45.937711  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.064145  994709 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:19:46.077198  994709 addons.go:238] Setting addon gcp-auth=true in "addons-693704"
	I1002 20:19:46.077246  994709 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:19:46.077691  994709 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:19:46.095085  994709 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:19:46.095135  994709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:19:46.095095  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:46.123058  994709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:19:46.164369  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.319756  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.321805  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:46.659517  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.818237  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.819904  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:46.919069  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:46.919103  994709 retry.go:31] will retry after 624.133237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:46.922816  994709 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:46.925777  994709 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:19:46.928684  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:19:46.928707  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:19:46.942491  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:19:46.942514  994709 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:19:46.955438  994709 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:46.955505  994709 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:19:46.968124  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:47.157960  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.322368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.322695  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.436497  994709 addons.go:479] Verifying addon gcp-auth=true in "addons-693704"
	I1002 20:19:47.440771  994709 out.go:179] * Verifying gcp-auth addon...
	I1002 20:19:47.444303  994709 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:19:47.456952  994709 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:19:47.457022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:47.544036  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:19:47.655544  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:47.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.819482  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.821740  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.947877  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.158799  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.321611  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.322176  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:48.351318  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:48.351351  994709 retry.go:31] will retry after 722.588456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:48.447412  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.658545  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.819500  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.821008  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.947811  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:49.074176  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:49.159044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.319369  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.321354  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:49.447565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:49.655967  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:49.657396  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.821534  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.821767  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:19:49.880261  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.880299  994709 retry.go:31] will retry after 823.045422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.948030  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.158812  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.318859  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.321025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.448207  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.657430  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.703742  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:50.819118  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.821057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.948368  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:51.157785  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.320463  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.321544  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.448039  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:51.519077  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:51.519109  994709 retry.go:31] will retry after 1.329942428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:51.658147  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.820515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.820951  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.947804  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:52.155980  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:52.158167  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.319637  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.321091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.448243  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:52.657697  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.819249  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.821572  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.849787  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:52.949420  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:53.160825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.319057  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.321137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.448348  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:53.651601  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:53.651634  994709 retry.go:31] will retry after 4.065518596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:53.657468  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.820524  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.821033  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.948075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:54.157447  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:54.158479  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.318431  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.320091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.447825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:54.657905  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.819025  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.820709  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.947593  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.158249  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.320256  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.320691  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.447448  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.658171  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.820678  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.821069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.948074  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:56.157411  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.319659  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.320449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.447640  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:56.655854  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:56.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.818780  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.820792  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.947591  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.157766  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.318816  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.320927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.447823  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.657501  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.717603  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:57.820669  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.822065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.948192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:58.157875  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.319486  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.321536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.447507  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:58.508047  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:58.508078  994709 retry.go:31] will retry after 6.392155287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:58.657525  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.818599  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.820265  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.947800  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:59.155950  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:19:59.158057  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.321502  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.447568  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:59.657515  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.818527  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.820423  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.947158  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.191965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.322779  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.323712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.462450  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.662487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.820978  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.821119  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.947103  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:01.165936  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.319105  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.321152  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.448705  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:01.656452  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:01.660465  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.820149  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.822237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.949425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.159485  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.320094  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.320855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.447847  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.658087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.822950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.823232  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.948025  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.158590  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.318905  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.321355  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.447723  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.657871  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.821238  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.821662  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.947536  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:04.157181  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:04.158586  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.319406  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.320569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.448026  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:04.657883  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.821087  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.821316  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.900418  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:04.947850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.159494  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:05.319260  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.321183  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.448018  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.659872  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:05.704226  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.704266  994709 retry.go:31] will retry after 4.650395594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:05.819910  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.820237  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.947300  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:06.157427  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:06.158681  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.319989  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.321509  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.447503  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:06.658321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.819075  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.820269  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.948556  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.158188  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.319456  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.320273  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.657768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.820523  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.821011  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.947761  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:08.157867  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.323022  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.323328  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.447949  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:08.655164  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:08.657821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.820915  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.822270  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.947285  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.157631  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.319269  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.320630  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.447999  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.657541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.821314  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.821825  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.947519  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:10.158695  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.320550  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.322127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.355287  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:10.448320  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:10.655677  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:10.658684  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.819582  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.820893  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.948135  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.160067  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:11.205481  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.205529  994709 retry.go:31] will retry after 8.886793783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:11.319286  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.320699  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.447959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.658932  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.818675  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.820427  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.947127  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.157818  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.319903  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.320793  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.447987  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.657853  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.819021  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.820692  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.947551  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:13.156319  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:13.159173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.319051  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.321143  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.448160  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:13.657596  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.820773  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.820960  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.948072  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.158231  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.319445  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.320543  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.447788  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.658082  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.819689  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.821091  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.948202  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:15.157836  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.319547  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.321065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.448065  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:15.654975  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:15.658703  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.819187  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.823588  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.947274  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.158585  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.318872  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.321029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.448029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.658178  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.819331  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.819902  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.947835  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.158511  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.319014  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.320821  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.447892  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.658439  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.818480  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.820595  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.947741  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:18.157451  994709 node_ready.go:57] node "addons-693704" has "Ready":"False" status (will retry)
	I1002 20:20:18.159031  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.320870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.321273  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.448214  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:18.658565  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.819116  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.821998  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.948071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.175178  994709 node_ready.go:49] node "addons-693704" is "Ready"
	I1002 20:20:19.175210  994709 node_ready.go:38] duration metric: took 38.523057861s for node "addons-693704" to be "Ready" ...
	I1002 20:20:19.175224  994709 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:20:19.175288  994709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:19.193541  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.198169  994709 api_server.go:72] duration metric: took 41.215410635s to wait for apiserver process to appear ...
	I1002 20:20:19.198244  994709 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:20:19.198278  994709 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 20:20:19.210833  994709 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 20:20:19.213021  994709 api_server.go:141] control plane version: v1.34.1
	I1002 20:20:19.213118  994709 api_server.go:131] duration metric: took 14.852434ms to wait for apiserver health ...
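	The healthz probe logged above can be reproduced by hand; a sketch using the endpoint printed in this log (-k skips verification of the apiserver's self-signed certificate):
	
	    curl -k https://192.168.49.2:8443/healthz
	    # a healthy apiserver answers: ok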
	I1002 20:20:19.213143  994709 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:20:19.259918  994709 system_pods.go:59] 18 kube-system pods found
	I1002 20:20:19.260007  994709 system_pods.go:61] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.260029  994709 system_pods.go:61] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.260046  994709 system_pods.go:61] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.260082  994709 system_pods.go:61] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.260110  994709 system_pods.go:61] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.260130  994709 system_pods.go:61] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.260165  994709 system_pods.go:61] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.260195  994709 system_pods.go:61] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 20:20:19.260219  994709 system_pods.go:61] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.260254  994709 system_pods.go:61] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.260278  994709 system_pods.go:61] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.260300  994709 system_pods.go:61] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.260337  994709 system_pods.go:61] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.260361  994709 system_pods.go:61] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.260379  994709 system_pods.go:61] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.260414  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.260436  994709 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.260455  994709 system_pods.go:61] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.260473  994709 system_pods.go:74] duration metric: took 47.310617ms to wait for pod list to return data ...
	I1002 20:20:19.260513  994709 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:20:19.273557  994709 default_sa.go:45] found service account: "default"
	I1002 20:20:19.273635  994709 default_sa.go:55] duration metric: took 13.103031ms for default service account to be created ...
	I1002 20:20:19.273660  994709 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:20:19.293816  994709 system_pods.go:86] 18 kube-system pods found
	I1002 20:20:19.293898  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.293920  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending
	I1002 20:20:19.293938  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending
	I1002 20:20:19.293975  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.294002  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.294023  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.294068  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.294095  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.294114  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.294148  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.294173  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.294198  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.294246  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.294273  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.294296  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.294328  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending
	I1002 20:20:19.294351  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.294370  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.294416  994709 retry.go:31] will retry after 259.220758ms: missing components: kube-dns
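	The retry loop here is blocked on kube-dns. A sketch of the equivalent manual check, reusing the kubeconfig and kubectl paths shown elsewhere in this log (k8s-app=kube-dns is the standard CoreDNS pod label, assumed here rather than taken from the log):
	
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl -n kube-system \
	      get pods -l k8s-app=kube-dns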
	I1002 20:20:19.349532  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.350103  994709 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:20:19.350175  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.523669  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.643831  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:19.643867  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending
	I1002 20:20:19.643879  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:19.643887  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:19.643893  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending
	I1002 20:20:19.643899  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:19.643904  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:19.643909  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:19.643918  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:19.643923  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending
	I1002 20:20:19.643931  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:19.643935  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:19.643940  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending
	I1002 20:20:19.643944  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending
	I1002 20:20:19.643948  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending
	I1002 20:20:19.643961  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending
	I1002 20:20:19.643965  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending
	I1002 20:20:19.643972  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:19.643980  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending
	I1002 20:20:19.643985  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending
	I1002 20:20:19.644006  994709 retry.go:31] will retry after 341.024008ms: missing components: kube-dns
	I1002 20:20:19.671892  994709 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:20:19.671917  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.827024  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.828000  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.961916  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.012275  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.012323  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.012334  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.012342  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.012350  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.012356  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.012362  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.012372  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.012377  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.012388  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.012400  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.012405  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.012412  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.012423  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.012429  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.012437  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.012448  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.012455  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012463  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.012473  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:20:20.012491  994709 retry.go:31] will retry after 476.605934ms: missing components: kube-dns
	I1002 20:20:20.092973  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:20.160870  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.323333  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.326140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.449179  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.500973  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.501060  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.501104  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.501129  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.501166  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.501192  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.501214  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.501249  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.501273  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.501296  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.501332  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.501358  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.501381  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.501417  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.501444  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.501467  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.501502  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.501531  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501554  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.501589  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.501625  994709 retry.go:31] will retry after 439.708141ms: missing components: kube-dns
	I1002 20:20:20.672849  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.819664  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.823622  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.948959  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.951441  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:20.951521  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:20:20.951545  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:20.951570  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:20.951663  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:20.951686  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:20.951728  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:20.951751  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:20.951769  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:20.951805  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:20.951826  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:20.951847  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:20.951883  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:20.951908  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:20.951932  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:20.951970  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:20.951997  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:20.952021  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952055  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:20.952078  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:20.952108  994709 retry.go:31] will retry after 739.124115ms: missing components: kube-dns
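The two `retry.go:31]` lines above show minikube re-checking the kube-system pod set with a growing, jittered delay (439 ms, then 739 ms) until no required component is reported missing. A minimal Go sketch of that polling pattern, assuming a hypothetical `missingComponents()` helper (not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missingComponents is a hypothetical stand-in for the check the log
// performs against the kube-system pod list (e.g. "missing components: kube-dns").
// A real implementation would query the API server.
func missingComponents() []string {
	return nil
}

func main() {
	backoff := 400 * time.Millisecond
	for {
		missing := missingComponents()
		if len(missing) == 0 {
			fmt.Println("all required components running")
			return
		}
		// Grow the delay and add jitter, mirroring the varying
		// "will retry after ..." durations in the log above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		backoff *= 2
	}
}
```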
	I1002 20:20:21.175706  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.321496  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.322173  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.447868  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:21.558307  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.465295653s)
	W1002 20:20:21.558346  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:21.558363  994709 retry.go:31] will retry after 14.276526589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout/stderr: identical to the failure output above (duplicate elided)
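The stderr above pinpoints the cause of the addon failure: kubectl's client-side validation rejects ig-crd.yaml because the document carries neither `apiVersion` nor `kind`. A minimal sketch of the same pre-check in Go, using gopkg.in/yaml.v3; the standalone checker and the local file path are illustrative, not part of minikube, and it assumes a single-document manifest for simplicity:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// header mirrors the two fields kubectl's validator requires on
// every Kubernetes object.
type header struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("ig-crd.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	var h header
	if err := yaml.Unmarshal(data, &h); err != nil {
		panic(err)
	}
	var missing []string
	if h.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if h.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		// This is exactly the condition kubectl reports in the log.
		fmt.Printf("error validating data: %v\n", missing)
		os.Exit(1)
	}
	fmt.Println("manifest header OK")
}
```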
	I1002 20:20:21.659390  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.696852  994709 system_pods.go:86] 19 kube-system pods found
	I1002 20:20:21.696889  994709 system_pods.go:89] "coredns-66bc5c9577-4kbq4" [0d4e97ee-4cf7-41ea-9d83-209043bf21bf] Running
	I1002 20:20:21.696903  994709 system_pods.go:89] "csi-hostpath-attacher-0" [aa5c6bed-34c0-4987-90bd-13fdb2ab7eef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 20:20:21.696912  994709 system_pods.go:89] "csi-hostpath-resizer-0" [39e4e0dd-c03e-442e-ba96-5ebfdda37746] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 20:20:21.696919  994709 system_pods.go:89] "csi-hostpathplugin-kkptd" [bc99365e-9c3b-4327-897f-52a44e266766] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 20:20:21.696928  994709 system_pods.go:89] "etcd-addons-693704" [d5a0dac1-88ea-4c54-b50f-af829bf0bc24] Running
	I1002 20:20:21.696933  994709 system_pods.go:89] "kindnet-p9zvn" [c8066bb8-8c46-4486-9a3a-475495929dad] Running
	I1002 20:20:21.696952  994709 system_pods.go:89] "kube-apiserver-addons-693704" [cdc1b96a-73cf-47cf-8088-42ee18bb7338] Running
	I1002 20:20:21.696957  994709 system_pods.go:89] "kube-controller-manager-addons-693704" [5ae99433-ccea-40f7-9fa5-edbb782e847b] Running
	I1002 20:20:21.696969  994709 system_pods.go:89] "kube-ingress-dns-minikube" [8910ab58-924f-499f-83da-d3db5ecb8c36] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:20:21.696973  994709 system_pods.go:89] "kube-proxy-gdxqs" [836c79d6-75e4-42c8-95a9-e463748ae25e] Running
	I1002 20:20:21.696977  994709 system_pods.go:89] "kube-scheduler-addons-693704" [7e76fa52-2719-4a8b-9ab1-a4fd78b1858f] Running
	I1002 20:20:21.696984  994709 system_pods.go:89] "metrics-server-85b7d694d7-8pl6l" [de09d76d-ec2f-415e-8e03-e9fc9e52d224] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:20:21.696990  994709 system_pods.go:89] "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:20:21.696997  994709 system_pods.go:89] "registry-66898fdd98-8rftt" [d8c315b0-0f59-4d80-816b-877d0845e06b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:20:21.697004  994709 system_pods.go:89] "registry-creds-764b6fb674-6cg6b" [d16ac5e8-a382-4faa-85dc-039ac18fa4cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:20:21.697010  994709 system_pods.go:89] "registry-proxy-2kw45" [e1a872c1-9255-437f-94c5-91e9a59ff3c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:20:21.697017  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-49h86" [8fe00ea7-f8d0-41db-a615-c2232d031538] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697023  994709 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bw7rc" [e1a79369-8218-4061-a4dc-7723199c1e0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 20:20:21.697030  994709 system_pods.go:89] "storage-provisioner" [2270acaf-10e4-4ab4-a65d-3a202593e529] Running
	I1002 20:20:21.697039  994709 system_pods.go:126] duration metric: took 2.42335813s to wait for k8s-apps to be running ...
	I1002 20:20:21.697049  994709 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:20:21.697109  994709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:20:21.712608  994709 system_svc.go:56] duration metric: took 15.548645ms WaitForService to wait for kubelet
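The WaitForService step above boils down to running `systemctl is-active --quiet` for the kubelet unit over SSH and treating a zero exit status as "running". A self-contained local sketch of that probe, using os/exec in place of minikube's ssh_runner and the plain unit name `kubelet` (loop bound and interval are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubeletActive reports whether systemd considers the kubelet unit
// active; `is-active --quiet` exits 0 only in that case.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	start := time.Now()
	for !kubeletActive() {
		// Poll until the unit comes up; a real caller would also time out.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("duration metric: took %v to wait for kubelet\n", time.Since(start))
}
```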
	I1002 20:20:21.712637  994709 kubeadm.go:586] duration metric: took 43.729883809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:20:21.712662  994709 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:20:21.716152  994709 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:20:21.716184  994709 node_conditions.go:123] node cpu capacity is 2
	I1002 20:20:21.716196  994709 node_conditions.go:105] duration metric: took 3.528491ms to run NodePressure ...
	I1002 20:20:21.716212  994709 start.go:242] waiting for startup goroutines ...
	[kapi.go:96 polling elided: the "kubernetes.io/minikube-addons=csi-hostpath-driver", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry" and "kubernetes.io/minikube-addons=gcp-auth" pod checks repeated every ~500ms from 20:20:21 through 20:20:35, each still reporting Pending: [<nil>]]
	I1002 20:20:35.835876  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:35.949194  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.163303  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.369184  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.369321  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.659548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.819011  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.821548  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.947353  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.013829  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177912886s)
	W1002 20:20:37.013873  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout/stderr: identical to the 20:20:21 apply failure above (duplicate elided)
	I1002 20:20:37.013894  994709 retry.go:31] will retry after 16.584617559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout/stderr: identical output (duplicate elided)
	[kapi.go:96 polling elided: the csi-hostpath-driver, ingress-nginx, registry and gcp-auth pod checks repeated every ~500ms from 20:20:37 through 20:20:53, each still reporting Pending: [<nil>]]
	I1002 20:20:53.598745  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:53.658921  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.822095  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.822140  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:53.948027  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.159720  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.319139  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.323475  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:54.449052  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.659950  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.800080  994709 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201286682s)
	W1002 20:20:54.800158  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout/stderr: identical to the 20:20:21 apply failure above (duplicate elided)
	I1002 20:20:54.800190  994709 retry.go:31] will retry after 36.238432013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout/stderr: identical output (duplicate elided)
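Each failed apply of the inspektor-gadget manifests is retried after a longer pause (14.3 s, 16.6 s, now 36.2 s). A hedged Go sketch of that outer loop around kubectl, including the `--validate=false` escape hatch the error message itself suggests; whether minikube ever falls back to that flag here is not shown in this log, and the bare `kubectl` binary name and fixed delay list are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon shells out to kubectl the way the log does; the manifest
// paths are the ones seen above, the kubectl path is simplified.
func applyAddon(extraArgs ...string) error {
	args := append([]string{"apply", "--force",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml",
		"-f", "/etc/kubernetes/addons/ig-deployment.yaml"}, extraArgs...)
	return exec.Command("kubectl", args...).Run()
}

func main() {
	delays := []time.Duration{14 * time.Second, 17 * time.Second, 36 * time.Second}
	for _, d := range delays {
		if err := applyAddon(); err == nil {
			fmt.Println("addon applied")
			return
		}
		fmt.Printf("apply failed, will retry after %v\n", d)
		time.Sleep(d)
	}
	// Last resort mirrors kubectl's own hint; this skips client-side
	// validation entirely, so it hides broken manifests rather than fixing them.
	if err := applyAddon("--validate=false"); err != nil {
		fmt.Println("apply still failing:", err)
	}
}
```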
	[kapi.go:96 polling elided: the csi-hostpath-driver, ingress-nginx, registry and gcp-auth pod checks repeated every ~500ms from 20:20:54 through 20:21:03, each still reporting Pending: [<nil>]]
	I1002 20:21:03.823965  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:03.824515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.947809  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.160130  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.319792  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.320970  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:04.458399  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.659641  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.819337  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.821346  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:04.948487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.159402  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.318537  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.320782  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:05.447768  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.659047  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.820074  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.821224  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:05.948044  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.158918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.319264  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.321170  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:06.448425  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.661071  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.819015  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.821112  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:06.948418  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.159287  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:07.320880  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.322732  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:07.448299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.659089  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:07.833876  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:07.834240  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.948415  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.158976  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.320300  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.320874  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:21:08.448633  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.659076  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:08.820477  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.820621  994709 kapi.go:107] duration metric: took 1m24.50316116s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 20:21:08.948034  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.158956  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.319324  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.447625  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.660083  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:09.826440  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.949323  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.163992  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.320103  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.449195  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.658029  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:10.843087  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.948535  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.159397  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.319712  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.447769  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.659756  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:11.819109  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.947822  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.159549  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.319206  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.446918  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.658927  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:12.824411  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.947802  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.159449  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.318706  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.454138  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.658608  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:13.819013  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.948036  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.159253  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.319616  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.449075  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.662100  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:14.824454  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.950365  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.161131  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.319196  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.447530  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.663409  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:15.820874  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.953095  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.165487  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.319583  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.448606  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.659953  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:16.819503  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.975219  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.158372  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.318879  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.448192  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.658937  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:17.820351  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.947275  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.158790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.319421  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.447567  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.659923  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:18.822375  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.947862  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.159020  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.319073  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.447850  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.659710  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:19.818515  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.947671  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.160392  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.318657  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.448137  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.660115  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:20.819099  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.951129  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.160373  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.325467  994709 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:21.449746  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.659955  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:21.819131  994709 kapi.go:107] duration metric: took 1m37.503635731s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:21:21.948370  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.158762  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.447738  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.658570  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:22.949101  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.158220  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.451919  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.658790  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:23.948375  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.159201  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.449117  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.659750  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:24.948295  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.160000  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.448116  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.658136  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:25.948058  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.158569  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.447775  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.658964  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:26.948377  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.159144  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.448069  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.658935  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:27.955751  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.159540  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.448912  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.662299  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:28.947885  994709 kapi.go:107] duration metric: took 1m41.503580566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:21:28.951140  994709 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-693704 cluster.
	I1002 20:21:28.954142  994709 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:21:28.956995  994709 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
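The `gcp-auth-skip-secret` hint above is the opt-out for the addon's mutating webhook: pods carrying that label key are left alone. A minimal sketch of such a pod spec using the standard k8s.io/api types (the pod name, namespace, image, and the label value "true" are illustrative; the message above only requires the key to be present):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical pod, not from this run
			Namespace: "default",
			// The label key the gcp-auth webhook looks for; pods that
			// carry it do not get credentials mounted.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox:stable",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
}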
	I1002 20:21:29.159855  994709 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:29.664073  994709 kapi.go:107] duration metric: took 1m45.009034533s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
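For context, the long run of kapi.go:96 lines above is minikube polling each addon's pods by label selector and logging their phase until they leave Pending; kapi.go:107 then records the duration metric. A rough client-go sketch of that loop (the selector, namespace, poll interval, and timeout are illustrative, not minikube's actual values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls until every pod matching selector is Running,
// mirroring the "waiting for pod ... current state: Pending" loop above.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors / no pods yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitForPodsByLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}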
	I1002 20:21:31.039676  994709 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 20:21:31.852592  994709 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:21:31.852690  994709 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
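The stderr above pins down why this retry keeps failing: the document in ig-crd.yaml is missing its apiVersion and kind fields, so kubectl's validation rejects that file while the ig-deployment.yaml resources still apply (hence the "unchanged"/"configured" lines in stdout). The check being tripped is roughly the following (a sketch using sigs.k8s.io/yaml and apimachinery's TypeMeta; the inline manifest is a stand-in, not the real ig-crd.yaml contents):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Stand-in for a manifest document that lacks its type header.
	doc := []byte("metadata:\n  name: example\n")

	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		panic(err)
	}
	var missing []string
	if tm.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if tm.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		// Same class of failure kubectl reports above.
		fmt.Printf("error validating data: %v\n", missing)
	}
}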
	I1002 20:21:31.856656  994709 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 20:21:31.859688  994709 addons.go:514] duration metric: took 1m53.876564642s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner ingress-dns metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 20:21:31.859739  994709 start.go:247] waiting for cluster config update ...
	I1002 20:21:31.859761  994709 start.go:256] writing updated cluster config ...
	I1002 20:21:31.860060  994709 ssh_runner.go:195] Run: rm -f paused
	I1002 20:21:31.863547  994709 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:31.867571  994709 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.872068  994709 pod_ready.go:94] pod "coredns-66bc5c9577-4kbq4" is "Ready"
	I1002 20:21:31.872092  994709 pod_ready.go:86] duration metric: took 4.493776ms for pod "coredns-66bc5c9577-4kbq4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.874237  994709 pod_ready.go:83] waiting for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.878256  994709 pod_ready.go:94] pod "etcd-addons-693704" is "Ready"
	I1002 20:21:31.878280  994709 pod_ready.go:86] duration metric: took 4.022961ms for pod "etcd-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.880276  994709 pod_ready.go:83] waiting for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.885189  994709 pod_ready.go:94] pod "kube-apiserver-addons-693704" is "Ready"
	I1002 20:21:31.885218  994709 pod_ready.go:86] duration metric: took 4.915919ms for pod "kube-apiserver-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:31.887484  994709 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.267515  994709 pod_ready.go:94] pod "kube-controller-manager-addons-693704" is "Ready"
	I1002 20:21:32.267553  994709 pod_ready.go:86] duration metric: took 380.043461ms for pod "kube-controller-manager-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.468152  994709 pod_ready.go:83] waiting for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:32.869233  994709 pod_ready.go:94] pod "kube-proxy-gdxqs" is "Ready"
	I1002 20:21:32.869266  994709 pod_ready.go:86] duration metric: took 401.082172ms for pod "kube-proxy-gdxqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.067662  994709 pod_ready.go:83] waiting for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469284  994709 pod_ready.go:94] pod "kube-scheduler-addons-693704" is "Ready"
	I1002 20:21:33.469361  994709 pod_ready.go:86] duration metric: took 401.671243ms for pod "kube-scheduler-addons-693704" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:21:33.469380  994709 pod_ready.go:40] duration metric: took 1.605801066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:21:33.530905  994709 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:21:33.534526  994709 out.go:179] * Done! kubectl is now configured to use "addons-693704" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.153251471Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 Namespace:local-path-storage ID:132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a UID:bf5e0fa0-b505-42e6-98e4-bbed23229c11 NetNS:/var/run/netns/f52e2632-63f0-4221-b5df-87894cfaabf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b235c0}] Aliases:map[]}"
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.153597103Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84 for CNI network kindnet (type=ptp)"
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.157274358Z" level=info msg="Ran pod sandbox 132d4bf582483728fbe1da9de28b37b53ca80083eb7b6a9a6059aebe5856633a with infra container: local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84/POD" id=5ede4512-a8f4-4ead-84f8-176c2b2ecbbe name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.159231657Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2081ab0b-22cb-4452-96f2-b7895993fe93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.159391842Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=2081ab0b-22cb-4452-96f2-b7895993fe93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:24:22 addons-693704 crio[828]: time="2025-10-02T20:24:22.159448251Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=2081ab0b-22cb-4452-96f2-b7895993fe93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.7715208Z" level=info msg="Stopping pod sandbox: 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=7575ccd5-7cb5-4e81-96ba-6fd5fd567fd2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.771579744Z" level=info msg="Stopped pod sandbox (already stopped): 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=7575ccd5-7cb5-4e81-96ba-6fd5fd567fd2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.772377869Z" level=info msg="Removing pod sandbox: 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=f0a36a73-df2f-4723-836c-24ab29e03b33 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 20:24:32 addons-693704 crio[828]: time="2025-10-02T20:24:32.776818025Z" level=info msg="Removed pod sandbox: 46d08202a04ab539d10fe2fbd2ac7e3a524f680b96d5417dfa728c0c162d9ab5" id=f0a36a73-df2f-4723-836c-24ab29e03b33 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 20:24:33 addons-693704 crio[828]: time="2025-10-02T20:24:33.183176382Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:25:03 addons-693704 crio[828]: time="2025-10-02T20:25:03.446159523Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=944fff5c-71e3-4e14-b4b7-444e23e9473e name=/runtime.v1.ImageService/PullImage
	Oct 02 20:25:03 addons-693704 crio[828]: time="2025-10-02T20:25:03.448421438Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:25:35 addons-693704 crio[828]: time="2025-10-02T20:25:35.811561747Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.089875496Z" level=info msg="Pulling image: docker.io/nginx:latest" id=21014f01-c8f3-4d6b-82ef-0cc5114bd2d6 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.09236936Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596537996Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596707116Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:06 addons-693704 crio[828]: time="2025-10-02T20:26:06.596757445Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=f347cbd0-5725-4b75-b561-1dea00653a84 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.747966118Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.748159705Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:19 addons-693704 crio[828]: time="2025-10-02T20:26:19.748208131Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=cb7733cb-e2d6-4b44-a42a-d7e173bcc508 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:26:36 addons-693704 crio[828]: time="2025-10-02T20:26:36.373758435Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:27:06 addons-693704 crio[828]: time="2025-10-02T20:27:06.643610753Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=80ac12bc-c4b9-49ab-9f30-9bfc5d720786 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:27:06 addons-693704 crio[828]: time="2025-10-02T20:27:06.646541053Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	0bc9f0d1b235e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          5 minutes ago       Running             busybox                                  0                   a4b1fc9c97e53       busybox                                    default
	6928dd54cd320       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          5 minutes ago       Running             csi-snapshotter                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	40761b95b2196       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 5 minutes ago       Running             gcp-auth                                 0                   9c1545073abea       gcp-auth-78565c9fb4-27djq                  gcp-auth
	8860f0e019516       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          5 minutes ago       Running             csi-provisioner                          0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	36c49020464e2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            5 minutes ago       Running             liveness-probe                           0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b7161126faae3       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           5 minutes ago       Running             hostpath                                 0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	b2b0003c8ca36       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             5 minutes ago       Running             controller                               0                   3a08c5d217c56       ingress-nginx-controller-9cc49f96f-9frwt   ingress-nginx
	2852575f20001       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            5 minutes ago       Running             gadget                                   0                   34878d06228a7       gadget-gljs2                               gadget
	ee97eb0b32c7f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                5 minutes ago       Running             node-driver-registrar                    0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	e42d2c0b7778e       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             5 minutes ago       Running             local-path-provisioner                   0                   b4f667a1ce299       local-path-provisioner-648f6765c9-v6khh    local-path-storage
	fc0714b2fd72f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              5 minutes ago       Running             registry-proxy                           0                   c8535afb414d5       registry-proxy-2kw45                       kube-system
	bca1297af7427       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   6 minutes ago       Exited              patch                                    0                   e925887ddf0d9       ingress-nginx-admission-patch-v6xpn        ingress-nginx
	627ce890f2b48       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               6 minutes ago       Running             cloud-spanner-emulator                   0                   49dda3c4634a4       cloud-spanner-emulator-85f6b7fc65-5wsmw    default
	16f4af5cddb75       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           6 minutes ago       Running             registry                                 0                   4bae41325f3f5       registry-66898fdd98-8rftt                  kube-system
	91fa943497ee5       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        6 minutes ago       Running             metrics-server                           0                   27cb63141e106       metrics-server-85b7d694d7-8pl6l            kube-system
	439510daf689e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               6 minutes ago       Running             minikube-ingress-dns                     0                   e547aac4b280e       kube-ingress-dns-minikube                  kube-system
	063fa56393267       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              6 minutes ago       Running             csi-resizer                              0                   20ac69c0a7e28       csi-hostpath-resizer-0                     kube-system
	948a7498f368d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   6 minutes ago       Running             csi-external-health-monitor-controller   0                   1dfc0628d0d03       csi-hostpathplugin-kkptd                   kube-system
	bbd0c0fdbe948       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             6 minutes ago       Running             csi-attacher                             0                   e6f6a7809eb96       csi-hostpath-attacher-0                    kube-system
	697e9a6f92fb8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   6 minutes ago       Exited              create                                   0                   ec9abb5f653b7       ingress-nginx-admission-create-fndzf       ingress-nginx
	4a5b5d50e1426       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     6 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ae9275c193e86       nvidia-device-plugin-daemonset-jblz6       kube-system
	4757a91ace2d4       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      6 minutes ago       Running             volume-snapshot-controller               0                   7cb6188e8093e       snapshot-controller-7d9fbc56b8-49h86       kube-system
	88520ea2c4ca7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      6 minutes ago       Running             volume-snapshot-controller               0                   4de0d58fcc8d5       snapshot-controller-7d9fbc56b8-bw7rc       kube-system
	9390fd50f454e       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              6 minutes ago       Running             yakd                                     0                   a77b4648943e2       yakd-dashboard-5ff678cb9-b48gd             yakd-dashboard
	ec242b99be750       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             6 minutes ago       Running             coredns                                  0                   5e1993cbe5e41       coredns-66bc5c9577-4kbq4                   kube-system
	165a582582a89       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             6 minutes ago       Running             storage-provisioner                      0                   8b4b5f8349762       storage-provisioner                        kube-system
	cde8e7a8a028e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             7 minutes ago       Running             kindnet-cni                              0                   b1a33925c911a       kindnet-p9zvn                              kube-system
	0703880dcf265       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             7 minutes ago       Running             kube-proxy                               0                   18175bde14b29       kube-proxy-gdxqs                           kube-system
	972d6e9616c37       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             7 minutes ago       Running             etcd                                     0                   789f38c5890c2       etcd-addons-693704                         kube-system
	020148eb47c8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             7 minutes ago       Running             kube-scheduler                           0                   3aa090880fcae       kube-scheduler-addons-693704               kube-system
	ab99c3bb8f644       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             7 minutes ago       Running             kube-controller-manager                  0                   629d2cf069469       kube-controller-manager-addons-693704      kube-system
	71c9ea9528918       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             7 minutes ago       Running             kube-apiserver                           0                   de4f0abfefce3       kube-apiserver-addons-693704               kube-system
	
	
	==> coredns [ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b] <==
	[INFO] 10.244.0.17:55859 - 34053 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.006721575s
	[INFO] 10.244.0.17:55859 - 46822 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000305001s
	[INFO] 10.244.0.17:55859 - 21325 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000282717s
	[INFO] 10.244.0.17:37045 - 20421 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162088s
	[INFO] 10.244.0.17:37045 - 20651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000128325s
	[INFO] 10.244.0.17:51048 - 61194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092519s
	[INFO] 10.244.0.17:51048 - 61672 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085027s
	[INFO] 10.244.0.17:57091 - 44872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088334s
	[INFO] 10.244.0.17:57091 - 44684 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105589s
	[INFO] 10.244.0.17:59527 - 40959 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003459669s
	[INFO] 10.244.0.17:59527 - 41156 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003770241s
	[INFO] 10.244.0.17:59136 - 21305 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142257s
	[INFO] 10.244.0.17:59136 - 21125 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093717s
	[INFO] 10.244.0.21:41484 - 12317 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192315s
	[INFO] 10.244.0.21:60775 - 50484 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142913s
	[INFO] 10.244.0.21:49862 - 44888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127521s
	[INFO] 10.244.0.21:54840 - 52239 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149642s
	[INFO] 10.244.0.21:42560 - 6869 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156624s
	[INFO] 10.244.0.21:41861 - 43315 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000298545s
	[INFO] 10.244.0.21:38412 - 8398 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002294645s
	[INFO] 10.244.0.21:40087 - 34579 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002408201s
	[INFO] 10.244.0.21:50163 - 3512 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006805026s
	[INFO] 10.244.0.21:42501 - 46640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006618816s
	[INFO] 10.244.0.23:46061 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191659s
	[INFO] 10.244.0.23:58330 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122318s
	
	
	==> describe nodes <==
	Name:               addons-693704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-693704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=addons-693704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_19_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-693704
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-693704"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:19:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-693704
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:27:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:24:58 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:24:58 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:24:58 +0000   Thu, 02 Oct 2025 20:19:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:24:58 +0000   Thu, 02 Oct 2025 20:20:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-693704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 db645666b7ad4f1695da9df78e9fa367
	  System UUID:                021278b1-6d13-4d8b-91c7-a5de147567f7
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     cloud-spanner-emulator-85f6b7fc65-5wsmw                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  gadget                      gadget-gljs2                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  gcp-auth                    gcp-auth-78565c9fb4-27djq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9frwt                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m23s
	  kube-system                 coredns-66bc5c9577-4kbq4                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m29s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 csi-hostpathplugin-kkptd                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 etcd-addons-693704                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m36s
	  kube-system                 kindnet-p9zvn                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m30s
	  kube-system                 kube-apiserver-addons-693704                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-addons-693704                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-proxy-gdxqs                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-scheduler-addons-693704                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 metrics-server-85b7d694d7-8pl6l                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         7m24s
	  kube-system                 nvidia-device-plugin-daemonset-jblz6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 registry-66898fdd98-8rftt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 registry-creds-764b6fb674-6cg6b                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 registry-proxy-2kw45                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 snapshot-controller-7d9fbc56b8-49h86                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-bw7rc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  local-path-storage          helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  local-path-storage          local-path-provisioner-648f6765c9-v6khh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b48gd                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m28s                  kube-proxy       
	  Normal   Starting                 7m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m41s (x8 over 7m41s)  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m41s (x8 over 7m41s)  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m41s (x8 over 7m41s)  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m35s                  kubelet          Node addons-693704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m35s                  kubelet          Node addons-693704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m35s                  kubelet          Node addons-693704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m31s                  node-controller  Node addons-693704 event: Registered Node addons-693704 in Controller
	  Normal   NodeReady                6m48s                  kubelet          Node addons-693704 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3] <==
	{"level":"warn","ts":"2025-10-02T20:19:28.781544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.806892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.814167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.836647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.852657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.878105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.886646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.904572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.925806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.935913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.956578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.971517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:28.993677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.031509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.041915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.068902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:29.157895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.092047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:19:45.118929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.895880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:06.909631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.000732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:20:07.017116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:20:36.364046Z","caller":"traceutil/trace.go:172","msg":"trace[1063042819] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"113.56953ms","start":"2025-10-02T20:20:36.250465Z","end":"2025-10-02T20:20:36.364035Z","steps":["trace[1063042819] 'process raft request'  (duration: 56.881349ms)","trace[1063042819] 'compare'  (duration: 56.419938ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T20:20:36.365279Z","caller":"traceutil/trace.go:172","msg":"trace[29069078] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"104.71736ms","start":"2025-10-02T20:20:36.259205Z","end":"2025-10-02T20:20:36.363922Z","steps":["trace[29069078] 'process raft request'  (duration: 104.653649ms)"],"step_count":1}
	
	
	==> gcp-auth [40761b95b219669fa13be3f37e9874311bcd42514e92101fcec6f883bf46c837] <==
	2025/10/02 20:21:27 GCP Auth Webhook started!
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:34 Ready to marshal response ...
	2025/10/02 20:21:34 Ready to write response ...
	2025/10/02 20:21:55 Ready to marshal response ...
	2025/10/02 20:21:55 Ready to write response ...
	2025/10/02 20:21:59 Ready to marshal response ...
	2025/10/02 20:21:59 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:22:06 Ready to marshal response ...
	2025/10/02 20:22:06 Ready to write response ...
	2025/10/02 20:24:21 Ready to marshal response ...
	2025/10/02 20:24:21 Ready to write response ...
	2025/10/02 20:26:52 Ready to marshal response ...
	2025/10/02 20:26:52 Ready to write response ...
	
	
	==> kernel <==
	 20:27:07 up  5:09,  0 user,  load average: 0.82, 1.43, 2.55
	Linux addons-693704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0] <==
	I1002 20:24:58.908578       1 main.go:301] handling current node
	I1002 20:25:08.912213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:08.912320       1 main.go:301] handling current node
	I1002 20:25:18.907634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:18.907669       1 main.go:301] handling current node
	I1002 20:25:28.914475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:28.914591       1 main.go:301] handling current node
	I1002 20:25:38.908394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:38.908433       1 main.go:301] handling current node
	I1002 20:25:48.907613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:48.907646       1 main.go:301] handling current node
	I1002 20:25:58.910119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:25:58.910161       1 main.go:301] handling current node
	I1002 20:26:08.907658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:08.907789       1 main.go:301] handling current node
	I1002 20:26:18.914211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:18.914253       1 main.go:301] handling current node
	I1002 20:26:28.911384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:28.911494       1 main.go:301] handling current node
	I1002 20:26:38.914130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:38.914165       1 main.go:301] handling current node
	I1002 20:26:48.907640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:48.907681       1 main.go:301] handling current node
	I1002 20:26:58.908673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:26:58.908721       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba] <==
	I1002 20:20:43.745558       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:08.431186       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:08.431257       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:08.431339       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	E1002 20:21:08.433865       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.13.34:443: connect: connection refused" logger="UnhandledError"
	W1002 20:21:09.431415       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 20:21:09.431472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:09.431507       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:09.431564       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 20:21:09.432661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 20:21:13.450452       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:21:13.450503       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 20:21:13.450794       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.13.34:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1002 20:21:13.499856       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 20:21:44.668705       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43290: use of closed network connection
	
	
	==> kube-controller-manager [ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c] <==
	I1002 20:19:36.927821       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:19:36.927907       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-693704"
	I1002 20:19:36.927948       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 20:19:36.927971       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:19:36.929043       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:19:36.929089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:19:36.929104       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:19:36.929196       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:19:36.929242       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:19:36.930939       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:19:36.953633       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:19:36.957922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 20:19:42.958900       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 20:20:06.887630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:06.887888       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 20:20:06.887954       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 20:20:06.966287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 20:20:06.978573       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 20:20:06.989795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:20:07.080038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:20:21.939957       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 20:20:36.994429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:20:37.091221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1002 20:21:07.000284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 20:21:07.098427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1] <==
	I1002 20:19:38.989384       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:19:39.087738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:19:39.188580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:19:39.188619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:19:39.188702       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:19:39.263259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:19:39.267990       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:19:39.278942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:19:39.279269       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:19:39.279289       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:19:39.289355       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:19:39.289374       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:19:39.289655       1 config.go:200] "Starting service config controller"
	I1002 20:19:39.289662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:19:39.289995       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:19:39.290002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:19:39.290636       1 config.go:309] "Starting node config controller"
	I1002 20:19:39.290645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:19:39.290651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:19:39.390091       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:19:39.390138       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:19:39.390179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251] <==
	E1002 20:19:30.082976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:19:30.083025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:19:30.083075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:30.083123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:19:30.083172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:30.083221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:19:30.083269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:19:30.083318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:19:30.083367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:19:30.083415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:19:30.083460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.083513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.083555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:19:30.083651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:19:30.083692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:19:30.083739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:30.086243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 20:19:30.905348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:19:30.932288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:30.964617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:19:30.984039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:31.017892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:31.036527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:31.063255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 20:19:31.603691       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:24:29 addons-693704 kubelet[1282]: E1002 20:24:29.526916    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d16ac5e8-a382-4faa-85dc-039ac18fa4cf-gcr-creds podName:d16ac5e8-a382-4faa-85dc-039ac18fa4cf nodeName:}" failed. No retries permitted until 2025-10-02 20:26:31.526897117 +0000 UTC m=+418.937030092 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d16ac5e8-a382-4faa-85dc-039ac18fa4cf-gcr-creds") pod "registry-creds-764b6fb674-6cg6b" (UID: "d16ac5e8-a382-4faa-85dc-039ac18fa4cf") : secret "registry-creds-gcr" not found
	Oct 02 20:24:37 addons-693704 kubelet[1282]: E1002 20:24:37.747385    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
	Oct 02 20:24:38 addons-693704 kubelet[1282]: I1002 20:24:38.746794    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jblz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445393    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445458    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445651    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0a97c0d4-0277-4225-81aa-39349ced9b52): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:25:03 addons-693704 kubelet[1282]: E1002 20:25:03.445694    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:25:18 addons-693704 kubelet[1282]: E1002 20:25:18.747469    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	Oct 02 20:25:22 addons-693704 kubelet[1282]: I1002 20:25:22.747859    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-8rftt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:39 addons-693704 kubelet[1282]: I1002 20:25:39.747004    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kw45" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:41 addons-693704 kubelet[1282]: I1002 20:25:41.747057    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jblz6" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089045    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089147    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089357    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84_local-path-storage(bf5e0fa0-b505-42e6-98e4-bbed23229c11): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.089403    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" podUID="bf5e0fa0-b505-42e6-98e4-bbed23229c11"
	Oct 02 20:26:06 addons-693704 kubelet[1282]: E1002 20:26:06.597075    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: reading manifest sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" podUID="bf5e0fa0-b505-42e6-98e4-bbed23229c11"
	Oct 02 20:26:31 addons-693704 kubelet[1282]: E1002 20:26:31.546511    1282 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 20:26:31 addons-693704 kubelet[1282]: E1002 20:26:31.546606    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d16ac5e8-a382-4faa-85dc-039ac18fa4cf-gcr-creds podName:d16ac5e8-a382-4faa-85dc-039ac18fa4cf nodeName:}" failed. No retries permitted until 2025-10-02 20:28:33.546586929 +0000 UTC m=+540.956719904 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d16ac5e8-a382-4faa-85dc-039ac18fa4cf-gcr-creds") pod "registry-creds-764b6fb674-6cg6b" (UID: "d16ac5e8-a382-4faa-85dc-039ac18fa4cf") : secret "registry-creds-gcr" not found
	Oct 02 20:26:38 addons-693704 kubelet[1282]: I1002 20:26:38.746728    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-8rftt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:26:47 addons-693704 kubelet[1282]: I1002 20:26:47.747118    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kw45" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:26:52 addons-693704 kubelet[1282]: E1002 20:26:52.747795    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-6cg6b" podUID="d16ac5e8-a382-4faa-85dc-039ac18fa4cf"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.641735    1282 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.641811    1282 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.642025    1282 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0a97c0d4-0277-4225-81aa-39349ced9b52): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:27:06 addons-693704 kubelet[1282]: E1002 20:27:06.642096    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0a97c0d4-0277-4225-81aa-39349ced9b52"
	
	
	==> storage-provisioner [165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa] <==
	W1002 20:26:42.569451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:44.572709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:44.577239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:46.580460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:46.587292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:48.590705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:48.602238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:50.605227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:50.610685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:52.613668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:52.618154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:54.621370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:54.625844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:56.629514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:56.636324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:58.639418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:26:58.644028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:00.647127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:00.651440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:02.654951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:02.659581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:04.662127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:04.666292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:06.676513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:27:06.682250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-693704 -n addons-693704
helpers_test.go:269: (dbg) Run:  kubectl --context addons-693704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-693704 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-693704 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84: exit status 1 (109.88054ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-693704/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:21:59 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-78xtg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-78xtg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m9s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-693704
	  Warning  Failed     4m6s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    110s (x2 over 4m6s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     110s (x2 over 4m6s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    98s (x3 over 5m9s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2s (x3 over 4m6s)    kubelet            Error: ErrImagePull
	  Warning  Failed     2s (x2 over 2m5s)    kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t66j5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t66j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fndzf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v6xpn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6cg6b" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-693704 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-fndzf ingress-nginx-admission-patch-v6xpn registry-creds-764b6fb674-6cg6b helper-pod-create-pvc-ac97be76-61a8-4bcf-9eaf-5893023c8c84: exit status 1
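Note: the task-pv-pod events above show the image pull dying on Docker Hub's unauthenticated rate limit ("toomanyrequests"), not on the PVC or local-path plumbing. A hedged workaround sketch, assuming the profile name from this report and stock minikube/crictl flags, is to side-load or authenticate the pull so the kubelet never hits the anonymous limit:

	# pull on the host (which can use cached docker.io credentials), then load into the node's runtime
	docker pull docker.io/nginx:latest
	out/minikube-linux-arm64 -p addons-693704 image load docker.io/nginx:latest
	# or authenticate the pull inside the node; <user>:<token> is a placeholder
	out/minikube-linux-arm64 -p addons-693704 ssh "sudo crictl pull --creds '<user>:<token>' docker.io/nginx:latest"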
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (261.623269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:27:08.955827 1004087 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:08.956699 1004087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:08.956714 1004087 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:08.956720 1004087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:08.957036 1004087 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:27:08.957346 1004087 mustload.go:65] Loading cluster: addons-693704
	I1002 20:27:08.957709 1004087 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:08.957724 1004087 addons.go:606] checking whether the cluster is paused
	I1002 20:27:08.957823 1004087 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:08.957916 1004087 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:27:08.958423 1004087 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:27:08.976757 1004087 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:08.976811 1004087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:27:08.995147 1004087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:27:09.092514 1004087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:27:09.092621 1004087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:27:09.131568 1004087 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:27:09.131596 1004087 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:27:09.131602 1004087 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:27:09.131606 1004087 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:27:09.131609 1004087 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:27:09.131612 1004087 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:27:09.131615 1004087 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:27:09.131618 1004087 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:27:09.131621 1004087 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:27:09.131650 1004087 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:27:09.131658 1004087 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:27:09.131661 1004087 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:27:09.131664 1004087 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:27:09.131667 1004087 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:27:09.131670 1004087 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:27:09.131678 1004087 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:27:09.131687 1004087 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:27:09.131691 1004087 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:27:09.131695 1004087 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:27:09.131698 1004087 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:27:09.131703 1004087 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:27:09.131718 1004087 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:27:09.131727 1004087 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:27:09.131730 1004087 cri.go:89] found id: ""
	I1002 20:27:09.131791 1004087 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:27:09.146884 1004087 out.go:203] 
	W1002 20:27:09.149749 1004087 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:27:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:27:09.149772 1004087 out.go:285] * 
	* 
	W1002 20:27:09.157388 1004087 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:27:09.160304 1004087 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (303.29s)
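Note: every "addons disable" failure in this run exits with MK_ADDON_DISABLE_PAUSED because the paused-cluster check shells out to "sudo runc list -f json", which fails with "open /run/runc: no such file or directory". One plausible reading (an assumption, not confirmed by this log) is that CRI-O 1.34 is running containers through an OCI runtime whose state directory is not /run/runc (crun, for example, keeps state under /run/crun), so the runc CLI has nothing to list even though the crictl calls logged above enumerate every container. A minimal reproduction sketch against the node from this report:

	# the exact command minikube runs, executed inside the node
	out/minikube-linux-arm64 -p addons-693704 ssh "sudo runc list -f json"
	# runtime-agnostic CRI view of the same containers, which the cri.go lines above show working
	out/minikube-linux-arm64 -p addons-693704 ssh "sudo crictl ps -a"
	# check which state directory actually exists; /run/crun is an assumption about crun
	out/minikube-linux-arm64 -p addons-693704 ssh "ls /run/runc /run/crun"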

TestAddons/parallel/NvidiaDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jblz6" [a8f6f2a6-5e4e-4b3e-a4fd-b25fe6991b37] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004000599s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (260.312251ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:22:05.667655 1001117 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:22:05.668567 1001117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:05.668585 1001117 out.go:374] Setting ErrFile to fd 2...
	I1002 20:22:05.668593 1001117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:05.669083 1001117 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:22:05.669397 1001117 mustload.go:65] Loading cluster: addons-693704
	I1002 20:22:05.669756 1001117 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:22:05.669774 1001117 addons.go:606] checking whether the cluster is paused
	I1002 20:22:05.669872 1001117 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:22:05.669893 1001117 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:22:05.670401 1001117 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:22:05.688720 1001117 ssh_runner.go:195] Run: systemctl --version
	I1002 20:22:05.688783 1001117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:22:05.708594 1001117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:22:05.813619 1001117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:22:05.813724 1001117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:22:05.850889 1001117 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:22:05.850917 1001117 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:22:05.850923 1001117 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:22:05.850926 1001117 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:22:05.850930 1001117 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:22:05.850933 1001117 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:22:05.850936 1001117 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:22:05.850939 1001117 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:22:05.850941 1001117 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:22:05.850949 1001117 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:22:05.850953 1001117 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:22:05.850956 1001117 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:22:05.850959 1001117 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:22:05.850962 1001117 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:22:05.850965 1001117 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:22:05.850970 1001117 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:22:05.850973 1001117 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:22:05.850977 1001117 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:22:05.850980 1001117 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:22:05.850983 1001117 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:22:05.850987 1001117 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:22:05.850996 1001117 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:22:05.851000 1001117 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:22:05.851003 1001117 cri.go:89] found id: ""
	I1002 20:22:05.851073 1001117 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:22:05.864756 1001117 out.go:203] 
	W1002 20:22:05.865921 1001117 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:22:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:22:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:22:05.865943 1001117 out.go:285] * 
	* 
	W1002 20:22:05.873632 1001117 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:22:05.875154 1001117 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

TestAddons/parallel/Yakd (6.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-b48gd" [b06121f2-995e-4dae-821c-64b426a3c1c8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003361702s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-693704 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-693704 addons disable yakd --alsologtostderr -v=1: exit status 11 (254.145034ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:21:51.501297 1000805 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:21:51.502761 1000805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:21:51.502784 1000805 out.go:374] Setting ErrFile to fd 2...
	I1002 20:21:51.502790 1000805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:21:51.503101 1000805 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:21:51.503584 1000805 mustload.go:65] Loading cluster: addons-693704
	I1002 20:21:51.504023 1000805 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:21:51.504048 1000805 addons.go:606] checking whether the cluster is paused
	I1002 20:21:51.504162 1000805 config.go:182] Loaded profile config "addons-693704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:21:51.504177 1000805 host.go:66] Checking if "addons-693704" exists ...
	I1002 20:21:51.504619 1000805 cli_runner.go:164] Run: docker container inspect addons-693704 --format={{.State.Status}}
	I1002 20:21:51.523545 1000805 ssh_runner.go:195] Run: systemctl --version
	I1002 20:21:51.523605 1000805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-693704
	I1002 20:21:51.543116 1000805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/addons-693704/id_rsa Username:docker}
	I1002 20:21:51.640473 1000805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:21:51.640582 1000805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:21:51.670571 1000805 cri.go:89] found id: "6928dd54cd320350a0574bce9e99e262d52ef2ad32927799eb1e85f4f8e93f52"
	I1002 20:21:51.670594 1000805 cri.go:89] found id: "8860f0e019516ca00a020669c8250eb699147392f2fd0ab332eef75f31694ae5"
	I1002 20:21:51.670600 1000805 cri.go:89] found id: "36c49020464e244780926156984b457b4f46db8eacc0cd8fc4a9c835da280358"
	I1002 20:21:51.670604 1000805 cri.go:89] found id: "b7161126faae3543c45089c1c2a743e531e0d2ec905126b94b00abc8b3f4e86b"
	I1002 20:21:51.670620 1000805 cri.go:89] found id: "ee97eb0b32c7fe49fe1575e1d85aaabcb7c2a4c12bc8d6e96ec8ec1ace15eb83"
	I1002 20:21:51.670624 1000805 cri.go:89] found id: "fc0714b2fd72f680a79055bd40db196753ebd675e00b57fdbbb1311e6f52f0c6"
	I1002 20:21:51.670629 1000805 cri.go:89] found id: "16f4af5cddb751e7f70cd54a1b075a0e52070ec2ab609d7dd4b6382625d77f3c"
	I1002 20:21:51.670632 1000805 cri.go:89] found id: "91fa943497ee5a780fabd8bb6f6a4c48256ff45ea9d2abbfd6fe7173c099a981"
	I1002 20:21:51.670636 1000805 cri.go:89] found id: "439510daf689e871d24092dcfc8fc87def85821aa672fe72a12cd5370bc3106f"
	I1002 20:21:51.670642 1000805 cri.go:89] found id: "063fa56393267278e05822086a20d260f8a6eab5ccf9daa3feabae6e691670f2"
	I1002 20:21:51.670647 1000805 cri.go:89] found id: "948a7498f368d7c9cddb6833384a3cb174bb78753aa09668ec42b4d2860ebf4e"
	I1002 20:21:51.670650 1000805 cri.go:89] found id: "bbd0c0fdbe9489f01321a2881ee0a43621fde9603022a8e6e320b98adde72078"
	I1002 20:21:51.670653 1000805 cri.go:89] found id: "4a5b5d50e1426c0038720273cc7b8abbf6622876cd1ba10fda5cf36d7b50e6ab"
	I1002 20:21:51.670657 1000805 cri.go:89] found id: "4757a91ace2d4f7cb4fd077cfa0e537a6b0a1a55cfca1ddd7afa9cdee5de2849"
	I1002 20:21:51.670664 1000805 cri.go:89] found id: "88520ea2c4ca73535bd06329fc29000af61986adf6366f19c9a796b83180d153"
	I1002 20:21:51.670669 1000805 cri.go:89] found id: "ec242b99be750793241756b433c42b49a677122dcce78bb16c5317447a840d6b"
	I1002 20:21:51.670673 1000805 cri.go:89] found id: "165a582582a89668e5e87c56585b29fdcf40b56f43d062a640deb28c25c7b9aa"
	I1002 20:21:51.670676 1000805 cri.go:89] found id: "cde8e7a8a028eb53194e3242fa5b1ca959be4caa5f3dfcc54651f6870e6339b0"
	I1002 20:21:51.670679 1000805 cri.go:89] found id: "0703880dcf2658284b3b9d5ce7c8ee7a27205f93255a45c047cce3a5004884c1"
	I1002 20:21:51.670683 1000805 cri.go:89] found id: "972d6e9616c37d583b89cca1075227c1a16f0690e39e54fa131202e2d4c861e3"
	I1002 20:21:51.670688 1000805 cri.go:89] found id: "020148eb47c8c26b83fbbc22311b7e5d10e8335e7b56b0d3b9e40f7838717251"
	I1002 20:21:51.670694 1000805 cri.go:89] found id: "ab99c3bb8f644e3ea3e6c8c849244462e9e46e7edf9e26eefcb9a4270e2f171c"
	I1002 20:21:51.670697 1000805 cri.go:89] found id: "71c9ea95289182c66dd7206e470df0d7d2274078d9e5aadac349425af66e99ba"
	I1002 20:21:51.670700 1000805 cri.go:89] found id: ""
	I1002 20:21:51.670752 1000805 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 20:21:51.686258 1000805 out.go:203] 
	W1002 20:21:51.689089 1000805 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:21:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:21:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 20:21:51.689117 1000805 out.go:285] * 
	* 
	W1002 20:21:51.696999 1000805 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:21:51.700235 1000805 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-693704 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

TestForceSystemdFlag (516.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-987043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-987043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m32.245318968s)

-- stdout --
	* [force-systemd-flag-987043] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-987043" primary control-plane node in "force-systemd-flag-987043" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 21:39:29.564168 1161551 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:39:29.564296 1161551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:29.564312 1161551 out.go:374] Setting ErrFile to fd 2...
	I1002 21:39:29.564318 1161551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:29.564696 1161551 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:39:29.565158 1161551 out.go:368] Setting JSON to false
	I1002 21:39:29.566113 1161551 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22907,"bootTime":1759418263,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:39:29.566184 1161551 start.go:140] virtualization:  
	I1002 21:39:29.569797 1161551 out.go:179] * [force-systemd-flag-987043] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:39:29.574188 1161551 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:39:29.574339 1161551 notify.go:221] Checking for updates...
	I1002 21:39:29.580599 1161551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:39:29.583634 1161551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:39:29.586786 1161551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:39:29.589902 1161551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:39:29.592993 1161551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:39:29.596681 1161551 config.go:182] Loaded profile config "kubernetes-upgrade-840583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:29.596844 1161551 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:39:29.627042 1161551 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:39:29.627161 1161551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:29.680065 1161551 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:39:29.671007529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:29.680176 1161551 docker.go:319] overlay module found
	I1002 21:39:29.683421 1161551 out.go:179] * Using the docker driver based on user configuration
	I1002 21:39:29.686520 1161551 start.go:306] selected driver: docker
	I1002 21:39:29.686543 1161551 start.go:936] validating driver "docker" against <nil>
	I1002 21:39:29.686556 1161551 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:39:29.687322 1161551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:29.748142 1161551 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:39:29.738447157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:29.748305 1161551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:39:29.748548 1161551 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:39:29.751790 1161551 out.go:179] * Using Docker driver with root privileges
	I1002 21:39:29.754825 1161551 cni.go:84] Creating CNI manager for ""
	I1002 21:39:29.754904 1161551 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:39:29.754914 1161551 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:39:29.755552 1161551 start.go:350] cluster config:
	{Name:force-systemd-flag-987043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-987043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:29.760305 1161551 out.go:179] * Starting "force-systemd-flag-987043" primary control-plane node in "force-systemd-flag-987043" cluster
	I1002 21:39:29.763969 1161551 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:39:29.766845 1161551 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:39:29.769594 1161551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:29.769663 1161551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:39:29.769677 1161551 cache.go:59] Caching tarball of preloaded images
	I1002 21:39:29.769691 1161551 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:39:29.769771 1161551 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:39:29.769781 1161551 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:39:29.769881 1161551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/config.json ...
	I1002 21:39:29.769898 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/config.json: {Name:mk82cafd76304435639e0bf193e90dbf640e8614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:29.788839 1161551 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:39:29.788874 1161551 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:39:29.788887 1161551 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:39:29.788909 1161551 start.go:361] acquireMachinesLock for force-systemd-flag-987043: {Name:mkbeba4b2ebfe55e6038b7b34f46f730d9440489 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:39:29.789011 1161551 start.go:365] duration metric: took 81.524µs to acquireMachinesLock for "force-systemd-flag-987043"
	I1002 21:39:29.789043 1161551 start.go:94] Provisioning new machine with config: &{Name:force-systemd-flag-987043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-987043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:39:29.789106 1161551 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:39:29.792387 1161551 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:39:29.792598 1161551 start.go:160] libmachine.API.Create for "force-systemd-flag-987043" (driver="docker")
	I1002 21:39:29.792639 1161551 client.go:168] LocalClient.Create starting
	I1002 21:39:29.792729 1161551 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:39:29.792769 1161551 main.go:141] libmachine: Decoding PEM data...
	I1002 21:39:29.792786 1161551 main.go:141] libmachine: Parsing certificate...
	I1002 21:39:29.792839 1161551 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:39:29.792861 1161551 main.go:141] libmachine: Decoding PEM data...
	I1002 21:39:29.792880 1161551 main.go:141] libmachine: Parsing certificate...
	I1002 21:39:29.793240 1161551 cli_runner.go:164] Run: docker network inspect force-systemd-flag-987043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:39:29.809069 1161551 cli_runner.go:211] docker network inspect force-systemd-flag-987043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:39:29.809154 1161551 network_create.go:284] running [docker network inspect force-systemd-flag-987043] to gather additional debugging logs...
	I1002 21:39:29.809175 1161551 cli_runner.go:164] Run: docker network inspect force-systemd-flag-987043
	W1002 21:39:29.827308 1161551 cli_runner.go:211] docker network inspect force-systemd-flag-987043 returned with exit code 1
	I1002 21:39:29.827337 1161551 network_create.go:287] error running [docker network inspect force-systemd-flag-987043]: docker network inspect force-systemd-flag-987043: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-987043 not found
	I1002 21:39:29.827364 1161551 network_create.go:289] output of [docker network inspect force-systemd-flag-987043]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-987043 not found
	
	** /stderr **
	I1002 21:39:29.827464 1161551 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:39:29.845146 1161551 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:39:29.845542 1161551 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:39:29.845790 1161551 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:39:29.846153 1161551 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2abe5b67d660 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:7b:d9:d2:4c:25} reservation:<nil>}
	I1002 21:39:29.846669 1161551 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a5ff0}
	I1002 21:39:29.846696 1161551 network_create.go:124] attempt to create docker network force-systemd-flag-987043 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 21:39:29.846756 1161551 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-987043 force-systemd-flag-987043
	I1002 21:39:29.906350 1161551 network_create.go:108] docker network force-systemd-flag-987043 192.168.85.0/24 created
	I1002 21:39:29.906383 1161551 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-987043" container
	I1002 21:39:29.906465 1161551 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:39:29.922769 1161551 cli_runner.go:164] Run: docker volume create force-systemd-flag-987043 --label name.minikube.sigs.k8s.io=force-systemd-flag-987043 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:39:29.939340 1161551 oci.go:103] Successfully created a docker volume force-systemd-flag-987043
	I1002 21:39:29.939465 1161551 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-987043-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-987043 --entrypoint /usr/bin/test -v force-systemd-flag-987043:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:39:31.645128 1161551 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-987043-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-987043 --entrypoint /usr/bin/test -v force-systemd-flag-987043:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (1.705618069s)
	I1002 21:39:31.645167 1161551 oci.go:107] Successfully prepared a docker volume force-systemd-flag-987043
	I1002 21:39:31.645196 1161551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:31.645220 1161551 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:39:31.645314 1161551 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-987043:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:39:36.051471 1161551 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-987043:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.406107045s)
	I1002 21:39:36.051505 1161551 kic.go:203] duration metric: took 4.406281169s to extract preloaded images to volume ...
	W1002 21:39:36.051645 1161551 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:39:36.051769 1161551 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:39:36.117231 1161551 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-987043 --name force-systemd-flag-987043 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-987043 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-987043 --network force-systemd-flag-987043 --ip 192.168.85.2 --volume force-systemd-flag-987043:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:39:36.412143 1161551 cli_runner.go:164] Run: docker container inspect force-systemd-flag-987043 --format={{.State.Running}}
	I1002 21:39:36.435818 1161551 cli_runner.go:164] Run: docker container inspect force-systemd-flag-987043 --format={{.State.Status}}
	I1002 21:39:36.461637 1161551 cli_runner.go:164] Run: docker exec force-systemd-flag-987043 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:39:36.512191 1161551 oci.go:144] the created container "force-systemd-flag-987043" has a running status.
	I1002 21:39:36.512220 1161551 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa...
	I1002 21:39:36.826706 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:39:36.826746 1161551 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:39:36.849954 1161551 cli_runner.go:164] Run: docker container inspect force-systemd-flag-987043 --format={{.State.Status}}
	I1002 21:39:36.871298 1161551 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:39:36.871316 1161551 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-987043 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:39:36.938692 1161551 cli_runner.go:164] Run: docker container inspect force-systemd-flag-987043 --format={{.State.Status}}
	I1002 21:39:36.959932 1161551 machine.go:93] provisionDockerMachine start ...
	I1002 21:39:36.960034 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:36.983240 1161551 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:36.983589 1161551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34161 <nil> <nil>}
	I1002 21:39:36.983605 1161551 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:39:36.984225 1161551 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:39:40.133783 1161551 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-987043
	
	I1002 21:39:40.133812 1161551 ubuntu.go:182] provisioning hostname "force-systemd-flag-987043"
	I1002 21:39:40.133881 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:40.151548 1161551 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:40.151872 1161551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34161 <nil> <nil>}
	I1002 21:39:40.151889 1161551 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-987043 && echo "force-systemd-flag-987043" | sudo tee /etc/hostname
	I1002 21:39:40.294890 1161551 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-987043
	
	I1002 21:39:40.294980 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:40.312226 1161551 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:40.312535 1161551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34161 <nil> <nil>}
	I1002 21:39:40.312557 1161551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-987043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-987043/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-987043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:39:40.442788 1161551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:39:40.442812 1161551 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:39:40.442839 1161551 ubuntu.go:190] setting up certificates
	I1002 21:39:40.442848 1161551 provision.go:84] configureAuth start
	I1002 21:39:40.442907 1161551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-987043
	I1002 21:39:40.461283 1161551 provision.go:143] copyHostCerts
	I1002 21:39:40.461324 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:39:40.461359 1161551 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:39:40.461371 1161551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:39:40.461446 1161551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:39:40.461548 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:39:40.461565 1161551 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:39:40.461570 1161551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:39:40.461595 1161551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:39:40.461633 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:39:40.461648 1161551 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:39:40.461652 1161551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:39:40.461677 1161551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:39:40.461720 1161551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-987043 san=[127.0.0.1 192.168.85.2 force-systemd-flag-987043 localhost minikube]
	I1002 21:39:40.847687 1161551 provision.go:177] copyRemoteCerts
	I1002 21:39:40.847784 1161551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:39:40.847864 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:40.869696 1161551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34161 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa Username:docker}
	I1002 21:39:40.973676 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:39:40.973804 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:39:40.994684 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:39:40.994746 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 21:39:41.028414 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:39:41.028484 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:39:41.050573 1161551 provision.go:87] duration metric: took 607.702338ms to configureAuth
	I1002 21:39:41.050600 1161551 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:39:41.050786 1161551 config.go:182] Loaded profile config "force-systemd-flag-987043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:41.050889 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:41.075547 1161551 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:41.075860 1161551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34161 <nil> <nil>}
	I1002 21:39:41.075877 1161551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:39:41.354655 1161551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:39:41.354741 1161551 machine.go:96] duration metric: took 4.394787308s to provisionDockerMachine
	I1002 21:39:41.354775 1161551 client.go:171] duration metric: took 11.562123666s to LocalClient.Create
	I1002 21:39:41.354824 1161551 start.go:168] duration metric: took 11.562225661s to libmachine.API.Create "force-systemd-flag-987043"
	I1002 21:39:41.354850 1161551 start.go:294] postStartSetup for "force-systemd-flag-987043" (driver="docker")
	I1002 21:39:41.354891 1161551 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:39:41.354995 1161551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:39:41.355076 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:41.383638 1161551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34161 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa Username:docker}
	I1002 21:39:41.481934 1161551 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:39:41.485169 1161551 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:39:41.485200 1161551 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:39:41.485212 1161551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:39:41.485268 1161551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:39:41.485364 1161551 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:39:41.485376 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> /etc/ssl/certs/9939542.pem
	I1002 21:39:41.485476 1161551 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:39:41.492847 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:39:41.510905 1161551 start.go:297] duration metric: took 156.009875ms for postStartSetup
	I1002 21:39:41.511303 1161551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-987043
	I1002 21:39:41.529803 1161551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/config.json ...
	I1002 21:39:41.530129 1161551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:39:41.530172 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:41.547473 1161551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34161 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa Username:docker}
	I1002 21:39:41.638954 1161551 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:39:41.643627 1161551 start.go:129] duration metric: took 11.85450419s to createHost
	I1002 21:39:41.643656 1161551 start.go:84] releasing machines lock for "force-systemd-flag-987043", held for 11.854630808s
	I1002 21:39:41.643727 1161551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-987043
	I1002 21:39:41.660217 1161551 ssh_runner.go:195] Run: cat /version.json
	I1002 21:39:41.660269 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:41.660280 1161551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:39:41.660335 1161551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-987043
	I1002 21:39:41.681040 1161551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34161 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa Username:docker}
	I1002 21:39:41.681965 1161551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34161 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-flag-987043/id_rsa Username:docker}
	I1002 21:39:41.773540 1161551 ssh_runner.go:195] Run: systemctl --version
	I1002 21:39:41.865513 1161551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:39:41.903770 1161551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:39:41.907967 1161551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:39:41.908034 1161551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:39:41.936941 1161551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:39:41.936965 1161551 start.go:496] detecting cgroup driver to use...
	I1002 21:39:41.936978 1161551 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1002 21:39:41.937032 1161551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:39:41.953684 1161551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:39:41.966430 1161551 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:39:41.966545 1161551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:39:41.984980 1161551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:39:42.011558 1161551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:39:42.152268 1161551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:39:42.312115 1161551 docker.go:234] disabling docker service ...
	I1002 21:39:42.312281 1161551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:39:42.339542 1161551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:39:42.355266 1161551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:39:42.486742 1161551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:39:42.610464 1161551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:39:42.624079 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:39:42.637946 1161551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:39:42.638017 1161551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:42.647040 1161551 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:39:42.647161 1161551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:42.655719 1161551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:42.664403 1161551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:42.672708 1161551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:39:42.680638 1161551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:42.689120 1161551 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:42.701825 1161551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
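(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to "systemd" — the point of this force-systemd test — put conmon in the pod cgroup, and open unprivileged ports via default_sysctls. A minimal Go sketch of the same line-oriented rewrite, shown on an in-memory string and assuming the stock 02-crio.conf key layout; standard-library regexp only:)

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
		// Same substitutions the sed commands perform on 02-crio.conf.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}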
	I1002 21:39:42.710823 1161551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:39:42.717882 1161551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:39:42.725211 1161551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:42.848112 1161551 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:39:42.973550 1161551 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:39:42.973618 1161551 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:39:42.977498 1161551 start.go:564] Will wait 60s for crictl version
	I1002 21:39:42.977586 1161551 ssh_runner.go:195] Run: which crictl
	I1002 21:39:42.981182 1161551 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:39:43.009889 1161551 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:39:43.010070 1161551 ssh_runner.go:195] Run: crio --version
	I1002 21:39:43.038851 1161551 ssh_runner.go:195] Run: crio --version
	I1002 21:39:43.071596 1161551 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:39:43.074415 1161551 cli_runner.go:164] Run: docker network inspect force-systemd-flag-987043 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:39:43.091009 1161551 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:39:43.094779 1161551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
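(The bash one-liner above is how the host.minikube.internal entry gets pinned in the guest's /etc/hosts: filter out any stale line for the name, append a fresh "ip<TAB>name" entry, and copy the result back. A small Go equivalent, with a hypothetical ensureHostsEntry helper operating on an in-memory string:)

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry drops any line already ending in "\t<name>" and
	// appends "<ip>\t<name>", mirroring the grep -v / echo pipeline above.
	func ensureHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
		fmt.Print(ensureHostsEntry(hosts, "192.168.85.1", "host.minikube.internal"))
	}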
	I1002 21:39:43.104703 1161551 kubeadm.go:883] updating cluster {Name:force-systemd-flag-987043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-987043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:39:43.104820 1161551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:43.104873 1161551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:39:43.137388 1161551 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:39:43.137409 1161551 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:39:43.137462 1161551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:39:43.162638 1161551 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:39:43.162660 1161551 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:39:43.162668 1161551 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:39:43.162760 1161551 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-987043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-987043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:39:43.162842 1161551 ssh_runner.go:195] Run: crio config
	I1002 21:39:43.236342 1161551 cni.go:84] Creating CNI manager for ""
	I1002 21:39:43.236417 1161551 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:39:43.236450 1161551 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:39:43.236503 1161551 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-987043 NodeName:force-systemd-flag-987043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:39:43.236685 1161551 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-987043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
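(The KubeletConfiguration document in the generated kubeadm.yaml above is what makes the force-systemd flag effective: its cgroupDriver: systemd must agree with the cgroup_manager written into the cri-o config earlier. A quick way to check a generated file for that field — a sketch assuming the gopkg.in/yaml.v3 module is available; only the relevant keys are decoded:)

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// One document of the multi-doc kubeadm.yaml above; a real check
		// would split the file on "---" and decode each part, keeping the
		// one whose kind is KubeletConfiguration.
		doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\n" +
			"kind: KubeletConfiguration\n" +
			"cgroupDriver: systemd\n")
		var cfg struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := yaml.Unmarshal(doc, &cfg); err != nil {
			panic(err)
		}
		fmt.Println(cfg.Kind, "-> cgroupDriver:", cfg.CgroupDriver)
	}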
	I1002 21:39:43.236805 1161551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:39:43.244622 1161551 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:39:43.244739 1161551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:39:43.252483 1161551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1002 21:39:43.265510 1161551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:39:43.277841 1161551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1002 21:39:43.290526 1161551 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:39:43.293888 1161551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:39:43.303526 1161551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:43.421647 1161551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:39:43.437130 1161551 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043 for IP: 192.168.85.2
	I1002 21:39:43.437195 1161551 certs.go:195] generating shared ca certs ...
	I1002 21:39:43.437226 1161551 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:43.437397 1161551 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:39:43.437477 1161551 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:39:43.437515 1161551 certs.go:257] generating profile certs ...
	I1002 21:39:43.437598 1161551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/client.key
	I1002 21:39:43.437637 1161551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/client.crt with IP's: []
	I1002 21:39:44.690108 1161551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/client.crt ...
	I1002 21:39:44.690146 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/client.crt: {Name:mk430c75e43fdddfbbecb22928181b2bdf888aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:44.690344 1161551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/client.key ...
	I1002 21:39:44.690359 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/client.key: {Name:mk98d71726d00460efb1b9a4ea57838e8e51e81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:44.690455 1161551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key.2ee9faf2
	I1002 21:39:44.690473 1161551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt.2ee9faf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 21:39:45.112137 1161551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt.2ee9faf2 ...
	I1002 21:39:45.112682 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt.2ee9faf2: {Name:mkf8c22453d4428465abb0ee0dd3759e188f915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:45.112981 1161551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key.2ee9faf2 ...
	I1002 21:39:45.124898 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key.2ee9faf2: {Name:mkab9ae13137eb8384f0fa27db45230458229257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:45.125067 1161551 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt.2ee9faf2 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt
	I1002 21:39:45.125164 1161551 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key.2ee9faf2 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key
	I1002 21:39:45.126773 1161551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.key
	I1002 21:39:45.127031 1161551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.crt with IP's: []
	I1002 21:39:45.920392 1161551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.crt ...
	I1002 21:39:45.920424 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.crt: {Name:mkc800672d55b10d0e30a596a397693eb3f52a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:45.920621 1161551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.key ...
	I1002 21:39:45.920635 1161551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.key: {Name:mk27e1cd541775ffef23b7ff9c88b276d62dd4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
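(The certs.go steps above produce a client cert, an apiserver serving cert — note its SAN list: the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.85.2 — and an aggregator proxy-client cert. An illustrative, self-signed sketch of the serving-cert shape using only crypto/x509; minikube signs these with its minikubeCA rather than self-signing:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Template for an apiserver-style serving cert; the IP SANs mirror
		// the list in the log above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		// Self-signed here for brevity (template doubles as parent).
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}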
	I1002 21:39:45.920727 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:39:45.920748 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:39:45.920761 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:39:45.920776 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:39:45.920790 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:39:45.920802 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:39:45.920818 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:39:45.920831 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:39:45.920886 1161551 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:39:45.920924 1161551 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:39:45.920936 1161551 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:39:45.920960 1161551 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:39:45.920988 1161551 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:39:45.921014 1161551 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:39:45.921059 1161551 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:39:45.921090 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:45.921105 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem -> /usr/share/ca-certificates/993954.pem
	I1002 21:39:45.921121 1161551 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> /usr/share/ca-certificates/9939542.pem
	I1002 21:39:45.921632 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:39:45.940867 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:39:45.958393 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:39:45.976007 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:39:45.993188 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 21:39:46.014591 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:39:46.033208 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:39:46.051412 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-flag-987043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:39:46.069698 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:39:46.088075 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:39:46.106831 1161551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:39:46.124751 1161551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:39:46.139959 1161551 ssh_runner.go:195] Run: openssl version
	I1002 21:39:46.146653 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:39:46.154996 1161551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:46.158669 1161551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:46.158747 1161551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:46.199663 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:39:46.208141 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:39:46.216384 1161551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:39:46.220150 1161551 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:39:46.220216 1161551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:39:46.261758 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:39:46.270112 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:39:46.278491 1161551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:39:46.282261 1161551 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:39:46.282328 1161551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:39:46.323066 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:39:46.331539 1161551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:39:46.334955 1161551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:39:46.335008 1161551 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-987043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-987043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:46.335080 1161551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:39:46.335148 1161551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:39:46.363501 1161551 cri.go:89] found id: ""
	I1002 21:39:46.363628 1161551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:39:46.371312 1161551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:39:46.379187 1161551 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:39:46.379253 1161551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:39:46.386967 1161551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:39:46.386988 1161551 kubeadm.go:157] found existing configuration files:
	
	I1002 21:39:46.387061 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:39:46.394703 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:39:46.394810 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:39:46.401785 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:39:46.409453 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:39:46.409549 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:39:46.417167 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:39:46.424493 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:39:46.424557 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:39:46.432021 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:39:46.439405 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:39:46.439499 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:39:46.447128 1161551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:39:46.497675 1161551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:39:46.498314 1161551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:39:46.528333 1161551 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:39:46.528485 1161551 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:39:46.528563 1161551 kubeadm.go:318] OS: Linux
	I1002 21:39:46.528642 1161551 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:39:46.528737 1161551 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:39:46.528817 1161551 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:39:46.528902 1161551 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:39:46.529016 1161551 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:39:46.529105 1161551 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:39:46.529182 1161551 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:39:46.529269 1161551 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:39:46.529354 1161551 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:39:46.598342 1161551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:39:46.598516 1161551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:39:46.598648 1161551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:39:46.605773 1161551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:39:46.612186 1161551 out.go:252]   - Generating certificates and keys ...
	I1002 21:39:46.612350 1161551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:39:46.612451 1161551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:39:46.863449 1161551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:39:47.435949 1161551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:39:47.611644 1161551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:39:47.966757 1161551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:39:48.373250 1161551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:39:48.373426 1161551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-987043 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:39:49.390443 1161551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:39:49.390602 1161551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-987043 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:39:50.181521 1161551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:39:50.268501 1161551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:39:50.552799 1161551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:39:50.553357 1161551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:39:52.033315 1161551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:39:52.148765 1161551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:39:52.463415 1161551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:39:53.250383 1161551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:39:53.709940 1161551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:39:53.710782 1161551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:39:53.713416 1161551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:39:53.717005 1161551 out.go:252]   - Booting up control plane ...
	I1002 21:39:53.717111 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:39:53.717195 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:39:53.717270 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:39:53.735128 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:39:53.735472 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:39:53.743154 1161551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:39:53.743529 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:39:53.743784 1161551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:39:53.913577 1161551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:39:53.913705 1161551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:39:56.418419 1161551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.50181113s
	I1002 21:39:56.418545 1161551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:39:56.418863 1161551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 21:39:56.418966 1161551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:39:56.419049 1161551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:43:56.419461 1161551 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000197879s
	I1002 21:43:56.419797 1161551 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000018052s
	I1002 21:43:56.420082 1161551 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000566362s
	I1002 21:43:56.420093 1161551 kubeadm.go:318] 
	I1002 21:43:56.420189 1161551 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:43:56.420276 1161551 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 21:43:56.420381 1161551 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1002 21:43:56.420481 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:43:56.420559 1161551 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:43:56.420641 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:43:56.420646 1161551 kubeadm.go:318] 
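The crictl advice above is scriptable. A minimal triage sketch, assuming crictl is installed on the node and CRI-O is listening on the socket shown in the log (the --name regex filter is illustrative):

	export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock
	# List all Kubernetes containers, including ones that already exited
	sudo crictl ps -a | grep kube | grep -v pause
	# Tail the logs of every control-plane container that was created
	for id in $(sudo crictl ps -a --quiet --name 'kube-(apiserver|controller-manager|scheduler)'); do
	  sudo crictl logs --tail 50 "$id"
	done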
	I1002 21:43:56.424227 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:43:56.424484 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:43:56.424596 1161551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:43:56.425167 1161551 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:43:56.425241 1161551 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
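The endpoints named in the failure can also be probed by hand from inside the node before retrying; a minimal sketch using the URLs exactly as they appear in the log (-k skips certificate verification on the self-signed serving certs):

	# kubelet health (plain HTTP)
	curl http://127.0.0.1:10248/healthz
	# kube-apiserver livez on the advertise address
	curl -k https://192.168.85.2:8443/livez
	# controller-manager and scheduler health endpoints on localhost
	curl -k https://127.0.0.1:10257/healthz
	curl -k https://127.0.0.1:10259/livez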
	W1002 21:43:56.425364 1161551 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-987043 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-987043 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.50181113s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000197879s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000018052s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 21:43:56.425437 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:43:56.983348 1161551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:43:56.996647 1161551 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:43:56.996711 1161551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:43:57.006163 1161551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:43:57.006186 1161551 kubeadm.go:157] found existing configuration files:
	
	I1002 21:43:57.006244 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:43:57.014881 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:43:57.014952 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:43:57.022806 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:43:57.030857 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:43:57.030929 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:43:57.038472 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:43:57.046162 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:43:57.046231 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:43:57.054265 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:43:57.062024 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:43:57.062110 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
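The stale-config pass above reduces to: for each kubeconfig under /etc/kubernetes, keep it only if it references the expected control-plane endpoint, otherwise delete it before the retry. A shell equivalent of that loop, with the file list and endpoint taken from the log:

	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # rm -f mirrors minikube's cleanup: a missing file is not an error
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done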
	I1002 21:43:57.069817 1161551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:43:57.109163 1161551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:43:57.109461 1161551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:43:57.133615 1161551 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:43:57.133690 1161551 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:43:57.133732 1161551 kubeadm.go:318] OS: Linux
	I1002 21:43:57.133782 1161551 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:43:57.133835 1161551 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:43:57.133886 1161551 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:43:57.133944 1161551 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:43:57.133995 1161551 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:43:57.134068 1161551 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:43:57.134120 1161551 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:43:57.134172 1161551 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:43:57.134223 1161551 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:43:57.203943 1161551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:43:57.204098 1161551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:43:57.204225 1161551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
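The CGROUPS_* results in the verification reflect the kernel's controller inventory and can be reproduced directly from standard kernel interfaces (nothing minikube-specific):

	# cgroups v1: column 4 of /proc/cgroups is the enabled flag
	awk 'NR > 1 && $4 == 1 { print toupper($1), "enabled" }' /proc/cgroups
	# cgroups v2: the unified hierarchy lists controllers in one file
	cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null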
	I1002 21:43:57.211538 1161551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:43:57.219257 1161551 out.go:252]   - Generating certificates and keys ...
	I1002 21:43:57.219380 1161551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:43:57.219462 1161551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:43:57.219576 1161551 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:43:57.219647 1161551 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:43:57.219718 1161551 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:43:57.219771 1161551 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:43:57.219835 1161551 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:43:57.219897 1161551 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:43:57.219971 1161551 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:43:57.220044 1161551 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:43:57.220081 1161551 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:43:57.220137 1161551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:43:57.516842 1161551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:43:58.290402 1161551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:43:58.411203 1161551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:43:58.801439 1161551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:43:59.560066 1161551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:43:59.560572 1161551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:43:59.563057 1161551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:43:59.566537 1161551 out.go:252]   - Booting up control plane ...
	I1002 21:43:59.566649 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:43:59.566737 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:43:59.566810 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:43:59.582471 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:43:59.583005 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:43:59.595931 1161551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:43:59.596203 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:43:59.596258 1161551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:43:59.732110 1161551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:43:59.732251 1161551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:44:01.231359 1161551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500826177s
	I1002 21:44:01.234995 1161551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:44:01.235094 1161551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 21:44:01.235407 1161551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:44:01.235507 1161551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:48:01.236213 1161551 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001023852s
	I1002 21:48:01.236336 1161551 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001238336s
	I1002 21:48:01.236728 1161551 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001435049s
	I1002 21:48:01.236752 1161551 kubeadm.go:318] 
	I1002 21:48:01.236848 1161551 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:48:01.236937 1161551 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 21:48:01.237035 1161551 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1002 21:48:01.237140 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:48:01.237222 1161551 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:48:01.237318 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:48:01.237339 1161551 kubeadm.go:318] 
	I1002 21:48:01.242753 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:48:01.243007 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:48:01.243123 1161551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
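The second SystemVerification warning is usually benign: the verifier needs the kernel config, tries the `configs` module first, and falls back to on-disk copies. On this AWS kernel the fallback can be checked by hand (standard Ubuntu paths, shown as a sketch):

	sudo modprobe configs 2>/dev/null || true
	ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null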
	I1002 21:48:01.243745 1161551 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:48:01.243822 1161551 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:48:01.243882 1161551 kubeadm.go:402] duration metric: took 8m14.90887867s to StartCluster
	I1002 21:48:01.243935 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:48:01.244007 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:48:01.272997 1161551 cri.go:89] found id: ""
	I1002 21:48:01.273032 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.273041 1161551 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:48:01.273048 1161551 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:48:01.273113 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:48:01.298137 1161551 cri.go:89] found id: ""
	I1002 21:48:01.298219 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.298234 1161551 logs.go:284] No container was found matching "etcd"
	I1002 21:48:01.298242 1161551 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:48:01.298302 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:48:01.324226 1161551 cri.go:89] found id: ""
	I1002 21:48:01.324249 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.324257 1161551 logs.go:284] No container was found matching "coredns"
	I1002 21:48:01.324263 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:48:01.324324 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:48:01.351538 1161551 cri.go:89] found id: ""
	I1002 21:48:01.351571 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.351584 1161551 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:48:01.351592 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:48:01.351691 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:48:01.376837 1161551 cri.go:89] found id: ""
	I1002 21:48:01.376860 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.376868 1161551 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:48:01.376874 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:48:01.376941 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:48:01.406509 1161551 cri.go:89] found id: ""
	I1002 21:48:01.406545 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.406572 1161551 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:48:01.406586 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:48:01.406665 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:48:01.434225 1161551 cri.go:89] found id: ""
	I1002 21:48:01.434252 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.434262 1161551 logs.go:284] No container was found matching "kindnet"
	I1002 21:48:01.434271 1161551 logs.go:123] Gathering logs for container status ...
	I1002 21:48:01.434284 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:48:01.464635 1161551 logs.go:123] Gathering logs for kubelet ...
	I1002 21:48:01.464661 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:48:01.554789 1161551 logs.go:123] Gathering logs for dmesg ...
	I1002 21:48:01.554824 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:48:01.571494 1161551 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:48:01.571523 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:48:01.645423 1161551 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:48:01.635972    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.637043    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.637808    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.639366    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.639756    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 21:48:01.645446 1161551 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:48:01.645459 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
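The gathering pass above can be replayed on the node when triaging; a sketch using the same commands minikube just ran, redirected to files (the /tmp paths are illustrative):

	sudo crictl ps -a > /tmp/containers.txt
	sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
	sudo journalctl -u crio -n 400 > /tmp/crio.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log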
	W1002 21:48:01.724685 1161551 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500826177s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001023852s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001238336s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001435049s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:48:01.724758 1161551 out.go:285] * 
	W1002 21:48:01.724823 1161551 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500826177s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001023852s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001238336s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001435049s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:48:01.724840 1161551 out.go:285] * 
	W1002 21:48:01.730889 1161551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
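For the profile that failed in this run, the suggested collection command would look like this (profile name taken from the log; -p selects the profile and --file writes the bundle to a file, both standard minikube flags):

	minikube logs --file=logs.txt -p force-systemd-flag-987043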
	I1002 21:48:01.737253 1161551 out.go:203] 
	W1002 21:48:01.740104 1161551 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500826177s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001023852s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001238336s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001435049s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:48:01.740176 1161551 out.go:285] * 
	* 
	I1002 21:48:01.743345 1161551 out.go:203] 

                                                
                                                
** /stderr **
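The kubeadm output above already names the triage path: list the CRI-O containers and read the failing one's logs. A minimal sketch of that, run on the node (e.g. via `minikube ssh -p force-systemd-flag-987043`; CONTAINERID is a placeholder for an ID taken from the first command):

	# list kube containers CRI-O has started, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of the failing container (substitute a real ID)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# if nothing was started at all, check whether the kubelet is crash-looping
	sudo journalctl -u kubelet --no-pager | tail -n 50
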
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-987043 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-987043 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-02 21:48:02.14471564 +0000 UTC m=+5398.244652187
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-987043
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-987043:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972",
	        "Created": "2025-10-02T21:39:36.132377493Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1162203,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:39:36.197057665Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972/hostname",
	        "HostsPath": "/var/lib/docker/containers/207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972/hosts",
	        "LogPath": "/var/lib/docker/containers/207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972/207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972-json.log",
	        "Name": "/force-systemd-flag-987043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-987043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-987043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "207c430d36e114ce22359f9d516402caf27007c870b8359429123543c224f972",
	                "LowerDir": "/var/lib/docker/overlay2/3676b259b576e699c7a9ff64ffe63f411011ddb49d47e5288a3a7249250576a2-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3676b259b576e699c7a9ff64ffe63f411011ddb49d47e5288a3a7249250576a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3676b259b576e699c7a9ff64ffe63f411011ddb49d47e5288a3a7249250576a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3676b259b576e699c7a9ff64ffe63f411011ddb49d47e5288a3a7249250576a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-987043",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-987043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-987043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-987043",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-987043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f3f8452efca2cbb1b4df5af13cb63dcc2e135c7d6f8e574843524f48f29c871b",
	            "SandboxKey": "/var/run/docker/netns/f3f8452efca2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34161"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34162"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34165"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34163"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34164"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-987043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:70:5c:2d:fe:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b6ec40d937b96de2fdc031eefbe0fb80c62a6a33bd8db7fe6cd44c4b21c49c3",
	                    "EndpointID": "6355efff6f5d197a0ebd0e05a57ddedb35310b2b234ed33cfe3fadc09a531822",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-987043",
	                        "207c430d36e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
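Rather than scanning the full JSON above, the fields that matter here (container state, the static IP on the profile network, the published apiserver port) can be pulled with Go-template queries against the same container; a sketch using this run's profile name:

	# container state and the static IP assigned on the profile network
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-flag-987043
	# host port mapped to the apiserver's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-flag-987043
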
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-987043 -n force-systemd-flag-987043
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-987043 -n force-systemd-flag-987043: exit status 6 (318.844981ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:48:02.478932 1171613 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-987043" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig

                                                
                                                
** /stderr **
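The status error shows the profile's endpoint is missing from the kubeconfig; the warning's own suggestion, sketched against this profile:

	# rewrite the kubeconfig entry for this profile and repoint the context
	minikube update-context -p force-systemd-flag-987043
	# confirm kubectl now resolves a context
	kubectl config current-context
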
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-987043 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-644857 sudo systemctl cat kubelet --no-pager                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status docker --all --full --no-pager                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat docker --no-pager                                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/docker/daemon.json                                                          │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo docker system info                                                                   │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cri-dockerd --version                                                                │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat containerd --no-pager                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/containerd/config.toml                                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo containerd config dump                                                               │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status crio --all --full --no-pager                                        │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat crio --no-pager                                                        │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo crio config                                                                          │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ delete  │ -p cilium-644857                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │ 02 Oct 25 21:41 UTC │
	│ start   │ -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ force-systemd-flag-987043 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:41:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:41:13.526118 1167760 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:41:13.526368 1167760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:41:13.526404 1167760 out.go:374] Setting ErrFile to fd 2...
	I1002 21:41:13.526424 1167760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:41:13.526793 1167760 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:41:13.527299 1167760 out.go:368] Setting JSON to false
	I1002 21:41:13.528271 1167760 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23011,"bootTime":1759418263,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:41:13.528394 1167760 start.go:140] virtualization:  
	I1002 21:41:13.531822 1167760 out.go:179] * [force-systemd-env-916563] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:41:13.535594 1167760 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:41:13.535718 1167760 notify.go:221] Checking for updates...
	I1002 21:41:13.541528 1167760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:41:13.544342 1167760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:41:13.547163 1167760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:41:13.550103 1167760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:41:13.552919 1167760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 21:41:13.556331 1167760 config.go:182] Loaded profile config "force-systemd-flag-987043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:41:13.556447 1167760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:41:13.581246 1167760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:41:13.581368 1167760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:41:13.640325 1167760 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:41:13.631144649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:41:13.640465 1167760 docker.go:319] overlay module found
	I1002 21:41:13.643520 1167760 out.go:179] * Using the docker driver based on user configuration
	I1002 21:41:13.646294 1167760 start.go:306] selected driver: docker
	I1002 21:41:13.646311 1167760 start.go:936] validating driver "docker" against <nil>
	I1002 21:41:13.646323 1167760 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:41:13.647052 1167760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:41:13.699297 1167760 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:41:13.690250224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:41:13.699447 1167760 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:41:13.699677 1167760 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:41:13.702543 1167760 out.go:179] * Using Docker driver with root privileges
	I1002 21:41:13.705276 1167760 cni.go:84] Creating CNI manager for ""
	I1002 21:41:13.705354 1167760 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:41:13.705371 1167760 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:41:13.705442 1167760 start.go:350] cluster config:
	{Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:41:13.708462 1167760 out.go:179] * Starting "force-systemd-env-916563" primary control-plane node in "force-systemd-env-916563" cluster
	I1002 21:41:13.711271 1167760 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:41:13.714102 1167760 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:41:13.716850 1167760 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:41:13.716907 1167760 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:41:13.716920 1167760 cache.go:59] Caching tarball of preloaded images
	I1002 21:41:13.716951 1167760 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:41:13.717005 1167760 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:41:13.717016 1167760 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:41:13.717131 1167760 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/config.json ...
	I1002 21:41:13.717149 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/config.json: {Name:mk42f6fcb04d33c6273bdcf1dbad80753d27d2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:13.739826 1167760 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:41:13.739850 1167760 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:41:13.739867 1167760 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:41:13.739890 1167760 start.go:361] acquireMachinesLock for force-systemd-env-916563: {Name:mk6e2386a359293ea9595f3ba293d6807d5cc6e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:41:13.740000 1167760 start.go:365] duration metric: took 92.74µs to acquireMachinesLock for "force-systemd-env-916563"
	I1002 21:41:13.740028 1167760 start.go:94] Provisioning new machine with config: &{Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:41:13.740096 1167760 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:41:13.743360 1167760 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:41:13.743582 1167760 start.go:160] libmachine.API.Create for "force-systemd-env-916563" (driver="docker")
	I1002 21:41:13.743612 1167760 client.go:168] LocalClient.Create starting
	I1002 21:41:13.743687 1167760 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:41:13.743721 1167760 main.go:141] libmachine: Decoding PEM data...
	I1002 21:41:13.743734 1167760 main.go:141] libmachine: Parsing certificate...
	I1002 21:41:13.743787 1167760 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:41:13.743805 1167760 main.go:141] libmachine: Decoding PEM data...
	I1002 21:41:13.743824 1167760 main.go:141] libmachine: Parsing certificate...
	I1002 21:41:13.744162 1167760 cli_runner.go:164] Run: docker network inspect force-systemd-env-916563 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:41:13.760111 1167760 cli_runner.go:211] docker network inspect force-systemd-env-916563 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:41:13.760187 1167760 network_create.go:284] running [docker network inspect force-systemd-env-916563] to gather additional debugging logs...
	I1002 21:41:13.760216 1167760 cli_runner.go:164] Run: docker network inspect force-systemd-env-916563
	W1002 21:41:13.775212 1167760 cli_runner.go:211] docker network inspect force-systemd-env-916563 returned with exit code 1
	I1002 21:41:13.775247 1167760 network_create.go:287] error running [docker network inspect force-systemd-env-916563]: docker network inspect force-systemd-env-916563: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-916563 not found
	I1002 21:41:13.775260 1167760 network_create.go:289] output of [docker network inspect force-systemd-env-916563]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-916563 not found
	
	** /stderr **
	I1002 21:41:13.775356 1167760 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:41:13.791682 1167760 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:41:13.792046 1167760 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:41:13.792289 1167760 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:41:13.792724 1167760 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a00100}
	I1002 21:41:13.792745 1167760 network_create.go:124] attempt to create docker network force-systemd-env-916563 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 21:41:13.792806 1167760 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-916563 force-systemd-env-916563
	I1002 21:41:13.857363 1167760 network_create.go:108] docker network force-systemd-env-916563 192.168.76.0/24 created
	I1002 21:41:13.857397 1167760 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-916563" container
	I1002 21:41:13.857470 1167760 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:41:13.873607 1167760 cli_runner.go:164] Run: docker volume create force-systemd-env-916563 --label name.minikube.sigs.k8s.io=force-systemd-env-916563 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:41:13.892250 1167760 oci.go:103] Successfully created a docker volume force-systemd-env-916563
	I1002 21:41:13.892341 1167760 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-916563-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-916563 --entrypoint /usr/bin/test -v force-systemd-env-916563:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:41:14.414389 1167760 oci.go:107] Successfully prepared a docker volume force-systemd-env-916563
	I1002 21:41:14.414450 1167760 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:41:14.414476 1167760 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:41:14.414572 1167760 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-916563:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:41:18.840991 1167760 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-916563:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.426377379s)
	I1002 21:41:18.841027 1167760 kic.go:203] duration metric: took 4.426547155s to extract preloaded images to volume ...
	W1002 21:41:18.841187 1167760 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:41:18.841298 1167760 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:41:18.895627 1167760 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-916563 --name force-systemd-env-916563 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-916563 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-916563 --network force-systemd-env-916563 --ip 192.168.76.2 --volume force-systemd-env-916563:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:41:19.220597 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Running}}
	I1002 21:41:19.247769 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Status}}
	I1002 21:41:19.266993 1167760 cli_runner.go:164] Run: docker exec force-systemd-env-916563 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:41:19.316339 1167760 oci.go:144] the created container "force-systemd-env-916563" has a running status.
	I1002 21:41:19.316375 1167760 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa...
	I1002 21:41:19.516947 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:41:19.517006 1167760 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:41:19.539135 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Status}}
	I1002 21:41:19.563745 1167760 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:41:19.563770 1167760 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-916563 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:41:19.635876 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Status}}
	I1002 21:41:19.662759 1167760 machine.go:93] provisionDockerMachine start ...
	I1002 21:41:19.662863 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:19.699911 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:19.700246 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:19.700261 1167760 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:41:19.700863 1167760 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59434->127.0.0.1:34166: read: connection reset by peer
	I1002 21:41:22.841703 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-916563
	
	I1002 21:41:22.841727 1167760 ubuntu.go:182] provisioning hostname "force-systemd-env-916563"
	I1002 21:41:22.841789 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:22.860626 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:22.860957 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:22.860970 1167760 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-916563 && echo "force-systemd-env-916563" | sudo tee /etc/hostname
	I1002 21:41:23.000199 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-916563
	
	I1002 21:41:23.000320 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:23.018533 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:23.018867 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:23.018891 1167760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-916563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-916563/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-916563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:41:23.150161 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:41:23.150191 1167760 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:41:23.150233 1167760 ubuntu.go:190] setting up certificates
	I1002 21:41:23.150245 1167760 provision.go:84] configureAuth start
	I1002 21:41:23.150307 1167760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-916563
	I1002 21:41:23.167203 1167760 provision.go:143] copyHostCerts
	I1002 21:41:23.167245 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:41:23.167277 1167760 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:41:23.167288 1167760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:41:23.167367 1167760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:41:23.167460 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:41:23.167481 1167760 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:41:23.167486 1167760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:41:23.167523 1167760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:41:23.167568 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:41:23.167588 1167760 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:41:23.167595 1167760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:41:23.167624 1167760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:41:23.167673 1167760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-916563 san=[127.0.0.1 192.168.76.2 force-systemd-env-916563 localhost minikube]
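The server cert generated above should carry exactly the SANs listed in that log line. A quick spot check, assuming openssl is available on the host (paths taken from the log, not from an actual run):

	# Hypothetical verification of the SANs minikube baked into server.pem
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# Expected to list: 127.0.0.1, 192.168.76.2, force-systemd-env-916563, localhost, minikube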
	I1002 21:41:23.648409 1167760 provision.go:177] copyRemoteCerts
	I1002 21:41:23.648477 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:41:23.648524 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:23.666817 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
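Each "new ssh client" line like the one above records the forwarded port and per-machine key. If provisioning ever stalls on SSH, the same tunnel can be probed by hand; a minimal sketch using only the values from that log line:

	# Assumes the container is still running and the key file is intact
	ssh -o StrictHostKeyChecking=no -p 34166 \
	  -i /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa \
	  docker@127.0.0.1 hostname
	# Should print: force-systemd-env-916563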
	I1002 21:41:23.761548 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:41:23.761609 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:41:23.778848 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:41:23.778910 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 21:41:23.796599 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:41:23.796659 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:41:23.813574 1167760 provision.go:87] duration metric: took 663.306637ms to configureAuth
	I1002 21:41:23.813604 1167760 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:41:23.813777 1167760 config.go:182] Loaded profile config "force-systemd-env-916563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:41:23.813887 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:23.832027 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:23.832334 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:23.832354 1167760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:41:24.075649 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:41:24.075670 1167760 machine.go:96] duration metric: took 4.412889984s to provisionDockerMachine
	I1002 21:41:24.075681 1167760 client.go:171] duration metric: took 10.332062658s to LocalClient.Create
	I1002 21:41:24.075694 1167760 start.go:168] duration metric: took 10.332113282s to libmachine.API.Create "force-systemd-env-916563"
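The sysconfig write a few lines up is the last provisioning step; whether CRI-O actually consumed CRIO_MINIKUBE_OPTIONS can be checked from the host. A sketch, assuming the crio unit in the kicbase image loads /etc/sysconfig/crio.minikube through an EnvironmentFile directive (a mechanism this log does not itself confirm):

	# Confirm the option file landed, then see whether the crio unit references it
	docker exec force-systemd-env-916563 cat /etc/sysconfig/crio.minikube
	docker exec force-systemd-env-916563 systemctl cat crio | grep -iE 'EnvironmentFile|MINIKUBE'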
	I1002 21:41:24.075710 1167760 start.go:294] postStartSetup for "force-systemd-env-916563" (driver="docker")
	I1002 21:41:24.075721 1167760 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:41:24.075789 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:41:24.075857 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.094593 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.190171 1167760 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:41:24.193414 1167760 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:41:24.193441 1167760 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:41:24.193453 1167760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:41:24.193508 1167760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:41:24.193606 1167760 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:41:24.193617 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> /etc/ssl/certs/9939542.pem
	I1002 21:41:24.193715 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:41:24.201147 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:41:24.219312 1167760 start.go:297] duration metric: took 143.586031ms for postStartSetup
	I1002 21:41:24.219736 1167760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-916563
	I1002 21:41:24.236366 1167760 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/config.json ...
	I1002 21:41:24.236659 1167760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:41:24.236713 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.252905 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.347803 1167760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:41:24.352771 1167760 start.go:129] duration metric: took 10.612659192s to createHost
	I1002 21:41:24.352795 1167760 start.go:84] releasing machines lock for "force-systemd-env-916563", held for 10.61278458s
	I1002 21:41:24.352866 1167760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-916563
	I1002 21:41:24.370728 1167760 ssh_runner.go:195] Run: cat /version.json
	I1002 21:41:24.370778 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.370787 1167760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:41:24.370894 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.397642 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.398838 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.493585 1167760 ssh_runner.go:195] Run: systemctl --version
	I1002 21:41:24.584941 1167760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:41:24.623106 1167760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:41:24.627403 1167760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:41:24.627480 1167760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:41:24.655910 1167760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
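The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the kindnet CNI chosen later. Reversing it is just the mirror operation; a hypothetical undo sketch, to be run on the node:

	# Restore CNI configs that minikube's disabling step renamed
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;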
	I1002 21:41:24.655933 1167760 start.go:496] detecting cgroup driver to use...
	I1002 21:41:24.655961 1167760 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1002 21:41:24.656097 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:41:24.673332 1167760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:41:24.685482 1167760 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:41:24.685546 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:41:24.701404 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:41:24.719814 1167760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:41:24.832703 1167760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:41:24.965911 1167760 docker.go:234] disabling docker service ...
	I1002 21:41:24.966027 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:41:24.989018 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:41:25.006405 1167760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:41:25.126216 1167760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:41:25.267392 1167760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:41:25.280528 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:41:25.295989 1167760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:41:25.296078 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.304855 1167760 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:41:25.304928 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.314194 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.322929 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.332130 1167760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:41:25.341430 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.350060 1167760 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.363502 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.372334 1167760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:41:25.380192 1167760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:41:25.387523 1167760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:41:25.503123 1167760 ssh_runner.go:195] Run: sudo systemctl restart crio
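The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place; the net effect is easier to read as the keys the file should contain afterwards. A spot check on the node, with expected values reconstructed from the commands above rather than from an actual dump:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, if every sed applied cleanly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",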
	I1002 21:41:25.624407 1167760 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:41:25.624538 1167760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:41:25.628403 1167760 start.go:564] Will wait 60s for crictl version
	I1002 21:41:25.628523 1167760 ssh_runner.go:195] Run: which crictl
	I1002 21:41:25.632063 1167760 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:41:25.656087 1167760 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:41:25.656231 1167760 ssh_runner.go:195] Run: crio --version
	I1002 21:41:25.685560 1167760 ssh_runner.go:195] Run: crio --version
	I1002 21:41:25.718977 1167760 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:41:25.721842 1167760 cli_runner.go:164] Run: docker network inspect force-systemd-env-916563 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:41:25.736455 1167760 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:41:25.740329 1167760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
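The bash one-liner above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the fresh one, and copies the temp file back over /etc/hosts. The same pattern is applied to control-plane.minikube.internal further down. Once both steps have run, verifying the result on the node is a one-liner:

	grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
	# Expected (IPs taken from the log):
	#   192.168.76.1	host.minikube.internal
	#   192.168.76.2	control-plane.minikube.internal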
	I1002 21:41:25.750635 1167760 kubeadm.go:883] updating cluster {Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:41:25.750739 1167760 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:41:25.750813 1167760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:41:25.781699 1167760 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:41:25.781724 1167760 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:41:25.781778 1167760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:41:25.811194 1167760 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:41:25.811220 1167760 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:41:25.811229 1167760 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 21:41:25.811316 1167760 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-916563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
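The unit fragment above is what minikube renders for the kubelet; the log later scp's it as a 374-byte drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To see the merged unit as systemd resolves it on the node, something like:

	# Shows kubelet.service plus every drop-in, including 10-kubeadm.conf
	sudo systemctl cat kubelet
	# The effective ExecStart should match the flags logged above
	sudo systemctl show kubelet -p ExecStart --no-pager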
	I1002 21:41:25.811399 1167760 ssh_runner.go:195] Run: crio config
	I1002 21:41:25.869918 1167760 cni.go:84] Creating CNI manager for ""
	I1002 21:41:25.869986 1167760 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:41:25.870011 1167760 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:41:25.870055 1167760 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-916563 NodeName:force-systemd-env-916563 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:41:25.870187 1167760 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-916563"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:41:25.870261 1167760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:41:25.878016 1167760 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:41:25.878115 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:41:25.885808 1167760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 21:41:25.899640 1167760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:41:25.912986 1167760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
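At this point the rendered kubeadm config sits at /var/tmp/minikube/kubeadm.yaml.new (2220 bytes per the line above). Since the node carries the v1.34.1 binaries, the config can be linted before init; a sketch, assuming kubeadm's config validate subcommand (available since v1.26) is present in that binary:

	# Lint the rendered config before kubeadm init consumes it
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new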
	I1002 21:41:25.926270 1167760 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:41:25.929814 1167760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:41:25.939570 1167760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:41:26.048724 1167760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:41:26.067276 1167760 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563 for IP: 192.168.76.2
	I1002 21:41:26.067350 1167760 certs.go:195] generating shared ca certs ...
	I1002 21:41:26.067394 1167760 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.067613 1167760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:41:26.067685 1167760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:41:26.067717 1167760 certs.go:257] generating profile certs ...
	I1002 21:41:26.067802 1167760 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.key
	I1002 21:41:26.067864 1167760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.crt with IP's: []
	I1002 21:41:26.472814 1167760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.crt ...
	I1002 21:41:26.472848 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.crt: {Name:mk28822afcf9bad2ac6c923a9b9bd4fd0c35fa0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.473074 1167760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.key ...
	I1002 21:41:26.473093 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.key: {Name:mk40d533e68d8c195fb943dc97a94c16464457d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.473193 1167760 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da
	I1002 21:41:26.473211 1167760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 21:41:26.743202 1167760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da ...
	I1002 21:41:26.743236 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da: {Name:mkee7a3f859bac4775b0e36f24ab5f2e4de6964c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.743421 1167760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da ...
	I1002 21:41:26.743431 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da: {Name:mk0b688a96928dec869a0fdf46894c4c4bdda420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.743503 1167760 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt
	I1002 21:41:26.743575 1167760 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key
	I1002 21:41:26.743639 1167760 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key
	I1002 21:41:26.743657 1167760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt with IP's: []
	I1002 21:41:27.611411 1167760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt ...
	I1002 21:41:27.611444 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt: {Name:mk8c0096430bb7219abb411c1130902c8db84c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:27.612245 1167760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key ...
	I1002 21:41:27.612288 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key: {Name:mk42a118ebcffd268fa7380b82f2960394325c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:27.612415 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:41:27.612592 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:41:27.612636 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:41:27.612755 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:41:27.612792 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:41:27.612822 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:41:27.612853 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:41:27.613006 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:41:27.613102 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:41:27.613170 1167760 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:41:27.613197 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:41:27.613249 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:41:27.613298 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:41:27.613500 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:41:27.613578 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:41:27.613636 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem -> /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.613670 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.613709 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:27.614351 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:41:27.639026 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:41:27.658347 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:41:27.676042 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:41:27.693937 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 21:41:27.712753 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:41:27.731105 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:41:27.750164 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:41:27.771461 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:41:27.789038 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:41:27.806804 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:41:27.824615 1167760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:41:27.837751 1167760 ssh_runner.go:195] Run: openssl version
	I1002 21:41:27.844946 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:41:27.855279 1167760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.859593 1167760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.859685 1167760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.900868 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:41:27.909619 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:41:27.918252 1167760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.922660 1167760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.922766 1167760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.963922 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:41:27.972186 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:41:27.980557 1167760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:27.984370 1167760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:27.984454 1167760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:28.025686 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
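The three test -L || ln -fs blocks above implement OpenSSL's hashed-directory layout: each CA is linked as <subject-hash>.0 so verification can locate it without rehashing the whole directory, and the hash on the left of each symlink is exactly what the preceding openssl x509 -hash call printed. A generic sketch of the same operation for one cert:

	# Hypothetical: link one CA into the hashed trust directory by hand
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941, as in the log
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"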
	I1002 21:41:28.034521 1167760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:41:28.038421 1167760 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:41:28.038479 1167760 kubeadm.go:400] StartCluster: {Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:41:28.038568 1167760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:41:28.038629 1167760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:41:28.065114 1167760 cri.go:89] found id: ""
	I1002 21:41:28.065213 1167760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:41:28.073495 1167760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:41:28.081908 1167760 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:41:28.082014 1167760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:41:28.090366 1167760 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:41:28.090395 1167760 kubeadm.go:157] found existing configuration files:
	
	I1002 21:41:28.090449 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:41:28.099740 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:41:28.099821 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:41:28.108202 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:41:28.116025 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:41:28.116118 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:41:28.123603 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:41:28.131368 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:41:28.131452 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:41:28.138780 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:41:28.146425 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:41:28.146501 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
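The four grep-then-rm pairs above are one pattern repeated per kubeconfig: keep the file only if it already points at control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. Collapsed into a loop, the same logic reads:

	# Equivalent sketch of minikube's stale-kubeconfig cleanup
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done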
	I1002 21:41:28.153931 1167760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:41:28.193684 1167760 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:41:28.194025 1167760 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:41:28.244247 1167760 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:41:28.244344 1167760 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:41:28.244403 1167760 kubeadm.go:318] OS: Linux
	I1002 21:41:28.244471 1167760 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:41:28.244555 1167760 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:41:28.244620 1167760 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:41:28.244687 1167760 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:41:28.244762 1167760 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:41:28.244830 1167760 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:41:28.244892 1167760 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:41:28.244957 1167760 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:41:28.245022 1167760 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:41:28.336182 1167760 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:41:28.336334 1167760 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:41:28.336462 1167760 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:41:28.345250 1167760 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:41:28.352113 1167760 out.go:252]   - Generating certificates and keys ...
	I1002 21:41:28.352246 1167760 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:41:28.352335 1167760 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:41:28.798455 1167760 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:41:28.924958 1167760 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:41:29.244119 1167760 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:41:29.704228 1167760 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:41:30.136741 1167760 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:41:30.136919 1167760 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:41:30.449578 1167760 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:41:30.449743 1167760 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:41:31.433813 1167760 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:41:32.045986 1167760 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:41:32.285256 1167760 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:41:32.285552 1167760 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:41:32.664229 1167760 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:41:33.338585 1167760 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:41:34.011741 1167760 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:41:34.439290 1167760 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:41:34.950910 1167760 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:41:34.951737 1167760 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:41:34.954522 1167760 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:41:34.958198 1167760 out.go:252]   - Booting up control plane ...
	I1002 21:41:34.958301 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:41:34.958383 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:41:34.958454 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:41:34.973786 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:41:34.974181 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:41:34.982415 1167760 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:41:34.982719 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:41:34.982946 1167760 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:41:35.126635 1167760 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:41:35.126768 1167760 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:41:37.125213 1167760 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00099988s
	I1002 21:41:37.128881 1167760 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:41:37.128980 1167760 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:41:37.129402 1167760 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:41:37.129492 1167760 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:43:56.419461 1161551 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000197879s
	I1002 21:43:56.419797 1161551 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000018052s
	I1002 21:43:56.420082 1161551 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000566362s
	I1002 21:43:56.420093 1161551 kubeadm.go:318] 
	I1002 21:43:56.420189 1161551 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:43:56.420276 1161551 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:43:56.420381 1161551 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:43:56.420481 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:43:56.420559 1161551 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:43:56.420641 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:43:56.420646 1161551 kubeadm.go:318] 
	I1002 21:43:56.424227 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:43:56.424484 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:43:56.424596 1161551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:43:56.425167 1161551 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:43:56.425241 1161551 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:43:56.425364 1161551 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-987043 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-987043 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.50181113s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000197879s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000018052s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
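The [control-plane-check] phase in the output above polls three HTTPS endpoints for up to 4m0s each: the kube-apiserver livez on 192.168.85.2:8443, the kube-controller-manager healthz on 127.0.0.1:10257, and the kube-scheduler livez on 127.0.0.1:10259. The failure means none of them ever answered with 200 before the deadline. Below is a minimal Go sketch of that kind of poll loop — illustrative only, not kubeadm's actual implementation; waitHealthy is a hypothetical helper:

// poll.go - a sketch of the health polling performed by kubeadm's
// [control-plane-check] phase. Endpoint URLs are taken from the log
// above; everything else is illustrative.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	// Control-plane components serve self-signed certs, so a real
	// checker must skip or pin verification; skipped here for brevity.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // component is healthy
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	endpoints := []string{
		"https://192.168.85.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, url := range endpoints {
		if err := waitHealthy(url, 4*time.Minute); err != nil {
			fmt.Println(err) // matches the "is not healthy after 4m0s" lines above
		}
	}
}

Against the cluster in this run, all three loops would exhaust their deadline, matching the three "is not healthy after 4m0.00…s" lines.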
	I1002 21:43:56.425437 1161551 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:43:56.983348 1161551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:43:56.996647 1161551 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:43:56.996711 1161551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:43:57.006163 1161551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:43:57.006186 1161551 kubeadm.go:157] found existing configuration files:
	
	I1002 21:43:57.006244 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:43:57.014881 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:43:57.014952 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:43:57.022806 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:43:57.030857 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:43:57.030929 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:43:57.038472 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:43:57.046162 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:43:57.046231 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:43:57.054265 1161551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:43:57.062024 1161551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:43:57.062110 1161551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
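The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443, and removed otherwise (here the files simply do not exist, since kubeadm reset already wiped them, so every grep exits with status 2). A rough Go equivalent of that logic — a sketch under those assumptions, not minikube's actual code:

// cleanup.go - mirrors the `sudo grep ... || sudo rm -f ...` steps in
// the log above. File list and endpoint come from the log; note the
// real commands run as root, so this sketch assumes sufficient
// permissions on /etc/kubernetes.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove so kubeadm regenerates it.
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}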
	I1002 21:43:57.069817 1161551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:43:57.109163 1161551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:43:57.109461 1161551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:43:57.133615 1161551 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:43:57.133690 1161551 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:43:57.133732 1161551 kubeadm.go:318] OS: Linux
	I1002 21:43:57.133782 1161551 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:43:57.133835 1161551 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:43:57.133886 1161551 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:43:57.133944 1161551 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:43:57.133995 1161551 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:43:57.134068 1161551 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:43:57.134120 1161551 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:43:57.134172 1161551 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:43:57.134223 1161551 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:43:57.203943 1161551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:43:57.204098 1161551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:43:57.204225 1161551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:43:57.211538 1161551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:43:57.219257 1161551 out.go:252]   - Generating certificates and keys ...
	I1002 21:43:57.219380 1161551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:43:57.219462 1161551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:43:57.219576 1161551 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:43:57.219647 1161551 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:43:57.219718 1161551 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:43:57.219771 1161551 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:43:57.219835 1161551 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:43:57.219897 1161551 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:43:57.219971 1161551 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:43:57.220044 1161551 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:43:57.220081 1161551 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:43:57.220137 1161551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:43:57.516842 1161551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:43:58.290402 1161551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:43:58.411203 1161551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:43:58.801439 1161551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:43:59.560066 1161551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:43:59.560572 1161551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:43:59.563057 1161551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:43:59.566537 1161551 out.go:252]   - Booting up control plane ...
	I1002 21:43:59.566649 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:43:59.566737 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:43:59.566810 1161551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:43:59.582471 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:43:59.583005 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:43:59.595931 1161551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:43:59.596203 1161551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:43:59.596258 1161551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:43:59.732110 1161551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:43:59.732251 1161551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:44:01.231359 1161551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500826177s
	I1002 21:44:01.234995 1161551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:44:01.235094 1161551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 21:44:01.235407 1161551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:44:01.235507 1161551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:45:37.129380 1167760 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000136359s
	I1002 21:45:37.129502 1167760 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000107822s
	I1002 21:45:37.130411 1167760 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000411182s
	I1002 21:45:37.130434 1167760 kubeadm.go:318] 
	I1002 21:45:37.130530 1167760 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:45:37.130619 1167760 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:45:37.130715 1167760 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:45:37.130814 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:45:37.130892 1167760 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:45:37.130974 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:45:37.130979 1167760 kubeadm.go:318] 
	I1002 21:45:37.134501 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:45:37.134792 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:45:37.134916 1167760 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:45:37.135541 1167760 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:45:37.135617 1167760 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:45:37.135750 1167760 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.00099988s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136359s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000107822s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000411182s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 21:45:37.135836 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:45:37.699135 1167760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:45:37.712262 1167760 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:45:37.712349 1167760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:45:37.720500 1167760 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:45:37.720519 1167760 kubeadm.go:157] found existing configuration files:
	
	I1002 21:45:37.720593 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:45:37.728334 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:45:37.728423 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:45:37.735753 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:45:37.743942 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:45:37.744022 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:45:37.751610 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:45:37.759054 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:45:37.759116 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:45:37.766332 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:45:37.774856 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:45:37.774947 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:45:37.782875 1167760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:45:37.824782 1167760 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:45:37.824843 1167760 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:45:37.850631 1167760 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:45:37.850712 1167760 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:45:37.850752 1167760 kubeadm.go:318] OS: Linux
	I1002 21:45:37.850804 1167760 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:45:37.850858 1167760 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:45:37.850911 1167760 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:45:37.850966 1167760 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:45:37.851024 1167760 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:45:37.851084 1167760 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:45:37.851136 1167760 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:45:37.851190 1167760 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:45:37.851243 1167760 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:45:37.920925 1167760 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:45:37.921059 1167760 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:45:37.921184 1167760 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:45:37.928860 1167760 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:45:37.936006 1167760 out.go:252]   - Generating certificates and keys ...
	I1002 21:45:37.936107 1167760 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:45:37.936178 1167760 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:45:37.936307 1167760 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:45:37.936434 1167760 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:45:37.936514 1167760 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:45:37.936574 1167760 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:45:37.936656 1167760 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:45:37.936760 1167760 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:45:37.936893 1167760 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:45:37.936989 1167760 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:45:37.937045 1167760 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:45:37.937118 1167760 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:45:39.356212 1167760 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:45:39.970369 1167760 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:45:40.707350 1167760 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:45:40.843823 1167760 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:45:41.587742 1167760 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:45:41.588613 1167760 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:45:41.591474 1167760 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:45:41.594686 1167760 out.go:252]   - Booting up control plane ...
	I1002 21:45:41.594798 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:45:41.594895 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:45:41.596974 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:45:41.612529 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:45:41.612649 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:45:41.621829 1167760 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:45:41.622258 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:45:41.622559 1167760 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:45:41.764129 1167760 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:45:41.764269 1167760 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:45:43.265873 1167760 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501822128s
	I1002 21:45:43.269408 1167760 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:45:43.269512 1167760 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:45:43.269640 1167760 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:45:43.269729 1167760 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:48:01.236213 1161551 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001023852s
	I1002 21:48:01.236336 1161551 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001238336s
	I1002 21:48:01.236728 1161551 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001435049s
	I1002 21:48:01.236752 1161551 kubeadm.go:318] 
	I1002 21:48:01.236848 1161551 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:48:01.236937 1161551 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:48:01.237035 1161551 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:48:01.237140 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:48:01.237222 1161551 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:48:01.237318 1161551 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:48:01.237339 1161551 kubeadm.go:318] 
	I1002 21:48:01.242753 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:48:01.243007 1161551 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:48:01.243123 1161551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:48:01.243745 1161551 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:48:01.243822 1161551 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:48:01.243882 1161551 kubeadm.go:402] duration metric: took 8m14.90887867s to StartCluster
	I1002 21:48:01.243935 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:48:01.244007 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:48:01.272997 1161551 cri.go:89] found id: ""
	I1002 21:48:01.273032 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.273041 1161551 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:48:01.273048 1161551 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:48:01.273113 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:48:01.298137 1161551 cri.go:89] found id: ""
	I1002 21:48:01.298219 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.298234 1161551 logs.go:284] No container was found matching "etcd"
	I1002 21:48:01.298242 1161551 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:48:01.298302 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:48:01.324226 1161551 cri.go:89] found id: ""
	I1002 21:48:01.324249 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.324257 1161551 logs.go:284] No container was found matching "coredns"
	I1002 21:48:01.324263 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:48:01.324324 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:48:01.351538 1161551 cri.go:89] found id: ""
	I1002 21:48:01.351571 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.351584 1161551 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:48:01.351592 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:48:01.351691 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:48:01.376837 1161551 cri.go:89] found id: ""
	I1002 21:48:01.376860 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.376868 1161551 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:48:01.376874 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:48:01.376941 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:48:01.406509 1161551 cri.go:89] found id: ""
	I1002 21:48:01.406545 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.406572 1161551 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:48:01.406586 1161551 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:48:01.406665 1161551 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:48:01.434225 1161551 cri.go:89] found id: ""
	I1002 21:48:01.434252 1161551 logs.go:282] 0 containers: []
	W1002 21:48:01.434262 1161551 logs.go:284] No container was found matching "kindnet"
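Each "listing CRI containers" step above (cri.go) shells out to crictl with a --name filter and treats empty output as zero containers, which is why every component reports found id: "" followed by "0 containers". A small Go sketch of that lookup — component names are from the log, the listByName helper is illustrative:

// crilist.go - approximates minikube's per-component container lookup
// shown in the log above: run crictl with a name filter and count the
// returned IDs. Error handling is trimmed for brevity.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listByName returns the IDs of all containers (running or exited)
// whose name matches the given filter.
func listByName(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		fmt.Printf("%s: %d containers\n", c, len(listByName(c))) // all zero in the failed run above
	}
}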
	I1002 21:48:01.434271 1161551 logs.go:123] Gathering logs for container status ...
	I1002 21:48:01.434284 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:48:01.464635 1161551 logs.go:123] Gathering logs for kubelet ...
	I1002 21:48:01.464661 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:48:01.554789 1161551 logs.go:123] Gathering logs for dmesg ...
	I1002 21:48:01.554824 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:48:01.571494 1161551 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:48:01.571523 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:48:01.645423 1161551 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:48:01.635972    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.637043    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.637808    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.639366    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.639756    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:48:01.635972    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.637043    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.637808    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.639366    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:01.639756    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
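The stderr block above shows kubectl failing with "connection refused" on localhost:8443, i.e. nothing is listening on the apiserver port at all. One quick way to confirm that symptom independently of kubectl is a plain TCP dial; this Go sketch (hypothetical, not part of minikube) distinguishes a closed port, as here, from a hung or misbehaving server:

// probe.go - confirms the "connection refused" symptom from the
// stderr block above with a raw TCP dial to the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // matches "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port open") // server is listening; the problem is elsewhere
}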
	I1002 21:48:01.645446 1161551 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:48:01.645459 1161551 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1002 21:48:01.724685 1161551 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500826177s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001023852s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001238336s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001435049s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:48:01.724758 1161551 out.go:285] * 
	W1002 21:48:01.724823 1161551 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: (identical to the kubeadm init output quoted in the "Error starting cluster" message directly above)
	W1002 21:48:01.724840 1161551 out.go:285] * 
	W1002 21:48:01.730889 1161551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:48:01.737253 1161551 out.go:203] 
	W1002 21:48:01.740104 1161551 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: (identical to the kubeadm init output quoted above)
	W1002 21:48:01.740176 1161551 out.go:285] * 
	I1002 21:48:01.743345 1161551 out.go:203] 
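	The connection-refused errors above are downstream symptoms: as the CRI-O and kubelet sections below show, every control-plane container fails at creation, so nothing ever listens on 8443, 10257, or 10259. Following kubeadm's own crictl hint, a minimal check that no kube-* container was ever created (a sketch, run inside the node, e.g. via `minikube ssh`):
	
		$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
		# an empty table here matches the "container status" section below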
	
	
	==> CRI-O <==
	Oct 02 21:47:55 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:55.893189651Z" level=info msg="createCtr: removing container 66e61dacf34f73775b5a9a13febe93a9da72fe01320d6fbf6533ba078f3468b3" id=02324796-4130-4454-80fc-a2c2266c4370 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:55 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:55.893224784Z" level=info msg="createCtr: deleting container 66e61dacf34f73775b5a9a13febe93a9da72fe01320d6fbf6533ba078f3468b3 from storage" id=02324796-4130-4454-80fc-a2c2266c4370 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:55 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:55.896184763Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-987043_kube-system_7a7641f62a81953a4bf322c321eb1ed8_0" id=02324796-4130-4454-80fc-a2c2266c4370 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.875980975Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b5a4fb06-17f6-4ef5-b09b-2782ff0d4864 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.879643581Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=56f20b0c-4840-4764-878c-631a705a0f6a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.880614378Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-flag-987043/kube-scheduler" id=49447dba-f7a5-4560-8c5b-cec76a0cba33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.88085228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.885211598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.88582649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.897290432Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=49447dba-f7a5-4560-8c5b-cec76a0cba33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.898678046Z" level=info msg="createCtr: deleting container ID 198db10751a8302832a8a09aa2d0178cb439cdef31d62f13651d2e0a99248809 from idIndex" id=49447dba-f7a5-4560-8c5b-cec76a0cba33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.898722738Z" level=info msg="createCtr: removing container 198db10751a8302832a8a09aa2d0178cb439cdef31d62f13651d2e0a99248809" id=49447dba-f7a5-4560-8c5b-cec76a0cba33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.89876035Z" level=info msg="createCtr: deleting container 198db10751a8302832a8a09aa2d0178cb439cdef31d62f13651d2e0a99248809 from storage" id=49447dba-f7a5-4560-8c5b-cec76a0cba33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:56 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:47:56.902481432Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-987043_kube-system_0b193b30e948bd906aac384cbb54aeac_0" id=49447dba-f7a5-4560-8c5b-cec76a0cba33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.875988923Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6d20dbed-c38c-4020-8365-c1ba66c84d25 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.876851899Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2607ec92-ef52-4d69-94d3-2f4e69622cdf name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.877757467Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-flag-987043/kube-apiserver" id=2d211bfd-ade3-4cda-ac1f-66cc914666cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.878123881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.882551218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.883187681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.89864451Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2d211bfd-ade3-4cda-ac1f-66cc914666cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.900102916Z" level=info msg="createCtr: deleting container ID 530b4106035f15b475144488c70777a2362bb53e45b6fa5967faec4d5f7d5874 from idIndex" id=2d211bfd-ade3-4cda-ac1f-66cc914666cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.900142899Z" level=info msg="createCtr: removing container 530b4106035f15b475144488c70777a2362bb53e45b6fa5967faec4d5f7d5874" id=2d211bfd-ade3-4cda-ac1f-66cc914666cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.900179952Z" level=info msg="createCtr: deleting container 530b4106035f15b475144488c70777a2362bb53e45b6fa5967faec4d5f7d5874 from storage" id=2d211bfd-ade3-4cda-ac1f-66cc914666cc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:48:00 force-systemd-flag-987043 crio[836]: time="2025-10-02T21:48:00.902926907Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-987043_kube-system_1f22d4a1e76ac76e8869eed45153c788_0" id=2d211bfd-ade3-4cda-ac1f-66cc914666cc name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:48:03.111917    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:03.112709    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:03.114398    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:03.115227    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:48:03.116842    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 21:08] overlayfs: idmapped layers are currently not supported
	[  +3.176407] overlayfs: idmapped layers are currently not supported
	[ +43.828152] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:09] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:48:03 up  6:30,  0 user,  load average: 0.78, 0.72, 1.28
	Linux force-systemd-flag-987043 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:47:55 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:55.896685    1784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:47:55 force-systemd-flag-987043 kubelet[1784]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-987043_kube-system(7a7641f62a81953a4bf322c321eb1ed8): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:47:55 force-systemd-flag-987043 kubelet[1784]:  > logger="UnhandledError"
	Oct 02 21:47:55 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:55.896823    1784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-987043" podUID="7a7641f62a81953a4bf322c321eb1ed8"
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:56.875462    1784 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-987043\" not found" node="force-systemd-flag-987043"
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:56.902922    1784 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]:  > podSandboxID="c13cbfee87f2d610848a0ac10f294c6eef74fbc255c46cb9046e926739f8f55f"
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:56.903024    1784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-987043_kube-system(0b193b30e948bd906aac384cbb54aeac): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]:  > logger="UnhandledError"
	Oct 02 21:47:56 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:56.903056    1784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-987043" podUID="0b193b30e948bd906aac384cbb54aeac"
	Oct 02 21:47:57 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:57.506779    1784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-987043?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:47:57 force-systemd-flag-987043 kubelet[1784]: I1002 21:47:57.698276    1784 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-987043"
	Oct 02 21:47:57 force-systemd-flag-987043 kubelet[1784]: E1002 21:47:57.698661    1784 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-flag-987043"
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]: E1002 21:48:00.875560    1784 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-987043\" not found" node="force-systemd-flag-987043"
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]: E1002 21:48:00.903216    1784 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]:  > podSandboxID="430f834fd565086a7068b51194fd2129707fb843b3e7132692888222f6b2cd19"
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]: E1002 21:48:00.903309    1784 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-flag-987043_kube-system(1f22d4a1e76ac76e8869eed45153c788): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]:  > logger="UnhandledError"
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]: E1002 21:48:00.903350    1784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-flag-987043" podUID="1f22d4a1e76ac76e8869eed45153c788"
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]: E1002 21:48:00.931728    1784 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-987043\" not found"
	Oct 02 21:48:00 force-systemd-flag-987043 kubelet[1784]: E1002 21:48:00.938211    1784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-987043.186acaab2804a641  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-987043,UID:force-systemd-flag-987043,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-987043 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-987043,},FirstTimestamp:2025-10-02 21:44:00.901211713 +0000 UTC m=+1.171004039,LastTimestamp:2025-10-02 21:44:00.901211713 +0000 UTC m=+1.171004039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-flag-987043,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-987043 -n force-systemd-flag-987043
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-987043 -n force-systemd-flag-987043: exit status 6 (340.540265ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:48:03.580059 1171817 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-987043" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-987043" apiserver is not running, skipping kubectl commands (state="Stopped")
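The status output above also flags a stale kubectl context. The fix the warning names, scoped to this profile, would be roughly the following (a sketch; `-p` is minikube's standard profile flag, and since the profile is deleted just below this only applies if it were kept alive):

	$ out/minikube-linux-arm64 update-context -p force-systemd-flag-987043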
helpers_test.go:175: Cleaning up "force-systemd-flag-987043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-987043
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-987043: (1.938787418s)
--- FAIL: TestForceSystemdFlag (516.02s)
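Both force-systemd failures in this run share the same signature: CRI-O rejects every CreateContainer with `cannot open sd-bus: No such file or directory`, i.e. the runtime is trying to talk to systemd (the systemd cgroup manager this test forces) inside a node where no systemd D-Bus socket is reachable, so no control-plane container ever starts and kubeadm times out on all three health checks. A minimal sketch of how one might confirm both halves of that from the host, assuming the profile is recreated rather than deleted as above:

	# which cgroup manager did CRI-O end up configured with?
	$ minikube -p force-systemd-flag-987043 ssh "sudo grep -r cgroup_manager /etc/crio/"
	# is there a systemd bus socket to open? "cannot open sd-bus" implies there is not
	$ minikube -p force-systemd-flag-987043 ssh "test -S /run/dbus/system_bus_socket && echo present || echo missing"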

x
+
TestForceSystemdEnv (513.86s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
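docker_test.go drives this variant through the environment rather than a start flag; per the MINIKUBE_FORCE_SYSTEMD=true line echoed in the output below, the equivalent manual invocation would be roughly (a sketch, reusing the command line above):

	$ MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-916563 \
	    --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio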
E1002 21:41:34.539763  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:42:57.608795  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:44:25.747310  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:46:34.539706  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m30.312399219s)

-- stdout --
	* [force-systemd-env-916563] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-916563" primary control-plane node in "force-systemd-env-916563" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 21:41:13.526118 1167760 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:41:13.526368 1167760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:41:13.526404 1167760 out.go:374] Setting ErrFile to fd 2...
	I1002 21:41:13.526424 1167760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:41:13.526793 1167760 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:41:13.527299 1167760 out.go:368] Setting JSON to false
	I1002 21:41:13.528271 1167760 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23011,"bootTime":1759418263,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:41:13.528394 1167760 start.go:140] virtualization:  
	I1002 21:41:13.531822 1167760 out.go:179] * [force-systemd-env-916563] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:41:13.535594 1167760 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:41:13.535718 1167760 notify.go:221] Checking for updates...
	I1002 21:41:13.541528 1167760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:41:13.544342 1167760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:41:13.547163 1167760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:41:13.550103 1167760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:41:13.552919 1167760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 21:41:13.556331 1167760 config.go:182] Loaded profile config "force-systemd-flag-987043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:41:13.556447 1167760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:41:13.581246 1167760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:41:13.581368 1167760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:41:13.640325 1167760 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:41:13.631144649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:41:13.640465 1167760 docker.go:319] overlay module found
	I1002 21:41:13.643520 1167760 out.go:179] * Using the docker driver based on user configuration
	I1002 21:41:13.646294 1167760 start.go:306] selected driver: docker
	I1002 21:41:13.646311 1167760 start.go:936] validating driver "docker" against <nil>
	I1002 21:41:13.646323 1167760 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:41:13.647052 1167760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:41:13.699297 1167760 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:41:13.690250224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:41:13.699447 1167760 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:41:13.699677 1167760 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:41:13.702543 1167760 out.go:179] * Using Docker driver with root privileges
	I1002 21:41:13.705276 1167760 cni.go:84] Creating CNI manager for ""
	I1002 21:41:13.705354 1167760 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:41:13.705371 1167760 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:41:13.705442 1167760 start.go:350] cluster config:
	{Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:41:13.708462 1167760 out.go:179] * Starting "force-systemd-env-916563" primary control-plane node in "force-systemd-env-916563" cluster
	I1002 21:41:13.711271 1167760 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:41:13.714102 1167760 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:41:13.716850 1167760 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:41:13.716907 1167760 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:41:13.716920 1167760 cache.go:59] Caching tarball of preloaded images
	I1002 21:41:13.716951 1167760 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:41:13.717005 1167760 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:41:13.717016 1167760 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:41:13.717131 1167760 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/config.json ...
	I1002 21:41:13.717149 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/config.json: {Name:mk42f6fcb04d33c6273bdcf1dbad80753d27d2ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:13.739826 1167760 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:41:13.739850 1167760 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:41:13.739867 1167760 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:41:13.739890 1167760 start.go:361] acquireMachinesLock for force-systemd-env-916563: {Name:mk6e2386a359293ea9595f3ba293d6807d5cc6e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:41:13.740000 1167760 start.go:365] duration metric: took 92.74µs to acquireMachinesLock for "force-systemd-env-916563"
	I1002 21:41:13.740028 1167760 start.go:94] Provisioning new machine with config: &{Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:41:13.740096 1167760 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:41:13.743360 1167760 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:41:13.743582 1167760 start.go:160] libmachine.API.Create for "force-systemd-env-916563" (driver="docker")
	I1002 21:41:13.743612 1167760 client.go:168] LocalClient.Create starting
	I1002 21:41:13.743687 1167760 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:41:13.743721 1167760 main.go:141] libmachine: Decoding PEM data...
	I1002 21:41:13.743734 1167760 main.go:141] libmachine: Parsing certificate...
	I1002 21:41:13.743787 1167760 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:41:13.743805 1167760 main.go:141] libmachine: Decoding PEM data...
	I1002 21:41:13.743824 1167760 main.go:141] libmachine: Parsing certificate...
	I1002 21:41:13.744162 1167760 cli_runner.go:164] Run: docker network inspect force-systemd-env-916563 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:41:13.760111 1167760 cli_runner.go:211] docker network inspect force-systemd-env-916563 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:41:13.760187 1167760 network_create.go:284] running [docker network inspect force-systemd-env-916563] to gather additional debugging logs...
	I1002 21:41:13.760216 1167760 cli_runner.go:164] Run: docker network inspect force-systemd-env-916563
	W1002 21:41:13.775212 1167760 cli_runner.go:211] docker network inspect force-systemd-env-916563 returned with exit code 1
	I1002 21:41:13.775247 1167760 network_create.go:287] error running [docker network inspect force-systemd-env-916563]: docker network inspect force-systemd-env-916563: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-916563 not found
	I1002 21:41:13.775260 1167760 network_create.go:289] output of [docker network inspect force-systemd-env-916563]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-916563 not found
	
	** /stderr **
	I1002 21:41:13.775356 1167760 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:41:13.791682 1167760 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:41:13.792046 1167760 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:41:13.792289 1167760 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:41:13.792724 1167760 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a00100}
	I1002 21:41:13.792745 1167760 network_create.go:124] attempt to create docker network force-systemd-env-916563 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 21:41:13.792806 1167760 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-916563 force-systemd-env-916563
	I1002 21:41:13.857363 1167760 network_create.go:108] docker network force-systemd-env-916563 192.168.76.0/24 created
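	The subnet probe above walks the private /24 ranges (192.168.49/58/67 are taken) and settles on 192.168.76.0/24. A quick way to verify the created network matches, using the same docker CLI and template style the log itself uses (a sketch, assuming the network still exists):
	
		$ docker network inspect force-systemd-env-916563 \
		    --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
		# expected, per the log above: 192.168.76.0/24 via 192.168.76.1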
	I1002 21:41:13.857397 1167760 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-916563" container
	I1002 21:41:13.857470 1167760 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:41:13.873607 1167760 cli_runner.go:164] Run: docker volume create force-systemd-env-916563 --label name.minikube.sigs.k8s.io=force-systemd-env-916563 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:41:13.892250 1167760 oci.go:103] Successfully created a docker volume force-systemd-env-916563
	I1002 21:41:13.892341 1167760 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-916563-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-916563 --entrypoint /usr/bin/test -v force-systemd-env-916563:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:41:14.414389 1167760 oci.go:107] Successfully prepared a docker volume force-systemd-env-916563
	I1002 21:41:14.414450 1167760 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:41:14.414476 1167760 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:41:14.414572 1167760 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-916563:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:41:18.840991 1167760 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-916563:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.426377379s)
	I1002 21:41:18.841027 1167760 kic.go:203] duration metric: took 4.426547155s to extract preloaded images to volume ...
	W1002 21:41:18.841187 1167760 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:41:18.841298 1167760 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:41:18.895627 1167760 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-916563 --name force-systemd-env-916563 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-916563 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-916563 --network force-systemd-env-916563 --ip 192.168.76.2 --volume force-systemd-env-916563:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:41:19.220597 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Running}}
	I1002 21:41:19.247769 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Status}}
	I1002 21:41:19.266993 1167760 cli_runner.go:164] Run: docker exec force-systemd-env-916563 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:41:19.316339 1167760 oci.go:144] the created container "force-systemd-env-916563" has a running status.
	I1002 21:41:19.316375 1167760 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa...
	I1002 21:41:19.516947 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:41:19.517006 1167760 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:41:19.539135 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Status}}
	I1002 21:41:19.563745 1167760 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:41:19.563770 1167760 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-916563 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:41:19.635876 1167760 cli_runner.go:164] Run: docker container inspect force-systemd-env-916563 --format={{.State.Status}}
	I1002 21:41:19.662759 1167760 machine.go:93] provisionDockerMachine start ...
	I1002 21:41:19.662863 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:19.699911 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:19.700246 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:19.700261 1167760 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:41:19.700863 1167760 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59434->127.0.0.1:34166: read: connection reset by peer
	I1002 21:41:22.841703 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-916563
	
	I1002 21:41:22.841727 1167760 ubuntu.go:182] provisioning hostname "force-systemd-env-916563"
	I1002 21:41:22.841789 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:22.860626 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:22.860957 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:22.860970 1167760 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-916563 && echo "force-systemd-env-916563" | sudo tee /etc/hostname
	I1002 21:41:23.000199 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-916563
	
	I1002 21:41:23.000320 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:23.018533 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:23.018867 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:23.018891 1167760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-916563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-916563/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-916563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:41:23.150161 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:41:23.150191 1167760 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:41:23.150233 1167760 ubuntu.go:190] setting up certificates
	I1002 21:41:23.150245 1167760 provision.go:84] configureAuth start
	I1002 21:41:23.150307 1167760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-916563
	I1002 21:41:23.167203 1167760 provision.go:143] copyHostCerts
	I1002 21:41:23.167245 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:41:23.167277 1167760 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:41:23.167288 1167760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:41:23.167367 1167760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:41:23.167460 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:41:23.167481 1167760 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:41:23.167486 1167760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:41:23.167523 1167760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:41:23.167568 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:41:23.167588 1167760 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:41:23.167595 1167760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:41:23.167624 1167760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:41:23.167673 1167760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-916563 san=[127.0.0.1 192.168.76.2 force-systemd-env-916563 localhost minikube]
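	The server certificate minted above carries the SANs listed in the `san=[...]` field. Double-checking what actually landed on disk could look like this (a sketch; the path is taken from this log, and the `-ext` option assumes OpenSSL 1.1.1 or newer):
	
		$ openssl x509 -noout -ext subjectAltName \
		    -in /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem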
	I1002 21:41:23.648409 1167760 provision.go:177] copyRemoteCerts
	I1002 21:41:23.648477 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:41:23.648524 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:23.666817 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:23.761548 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:41:23.761609 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:41:23.778848 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:41:23.778910 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 21:41:23.796599 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:41:23.796659 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:41:23.813574 1167760 provision.go:87] duration metric: took 663.306637ms to configureAuth
	I1002 21:41:23.813604 1167760 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:41:23.813777 1167760 config.go:182] Loaded profile config "force-systemd-env-916563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:41:23.813887 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:23.832027 1167760 main.go:141] libmachine: Using SSH client type: native
	I1002 21:41:23.832334 1167760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34166 <nil> <nil>}
	I1002 21:41:23.832354 1167760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:41:24.075649 1167760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:41:24.075670 1167760 machine.go:96] duration metric: took 4.412889984s to provisionDockerMachine
	I1002 21:41:24.075681 1167760 client.go:171] duration metric: took 10.332062658s to LocalClient.Create
	I1002 21:41:24.075694 1167760 start.go:168] duration metric: took 10.332113282s to libmachine.API.Create "force-systemd-env-916563"
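Provisioning commands such as the crio.minikube write above are delivered over the SSH client opened at 127.0.0.1:34166. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh; host-key checking is skipped, which is tolerable only for a throwaway local test container, and error handling is abbreviated:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and endpoint taken from the sshutil log lines above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34166", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Printf("err=%v output=%s\n", err, out)
}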
	I1002 21:41:24.075710 1167760 start.go:294] postStartSetup for "force-systemd-env-916563" (driver="docker")
	I1002 21:41:24.075721 1167760 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:41:24.075789 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:41:24.075857 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.094593 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.190171 1167760 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:41:24.193414 1167760 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:41:24.193441 1167760 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:41:24.193453 1167760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:41:24.193508 1167760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:41:24.193606 1167760 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:41:24.193617 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> /etc/ssl/certs/9939542.pem
	I1002 21:41:24.193715 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:41:24.201147 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:41:24.219312 1167760 start.go:297] duration metric: took 143.586031ms for postStartSetup
	I1002 21:41:24.219736 1167760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-916563
	I1002 21:41:24.236366 1167760 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/config.json ...
	I1002 21:41:24.236659 1167760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:41:24.236713 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.252905 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.347803 1167760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:41:24.352771 1167760 start.go:129] duration metric: took 10.612659192s to createHost
	I1002 21:41:24.352795 1167760 start.go:84] releasing machines lock for "force-systemd-env-916563", held for 10.61278458s
	I1002 21:41:24.352866 1167760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-916563
	I1002 21:41:24.370728 1167760 ssh_runner.go:195] Run: cat /version.json
	I1002 21:41:24.370778 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.370787 1167760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:41:24.370894 1167760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-916563
	I1002 21:41:24.397642 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.398838 1167760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34166 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/force-systemd-env-916563/id_rsa Username:docker}
	I1002 21:41:24.493585 1167760 ssh_runner.go:195] Run: systemctl --version
	I1002 21:41:24.584941 1167760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:41:24.623106 1167760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:41:24.627403 1167760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:41:24.627480 1167760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:41:24.655910 1167760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:41:24.655933 1167760 start.go:496] detecting cgroup driver to use...
	I1002 21:41:24.655961 1167760 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1002 21:41:24.656097 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:41:24.673332 1167760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:41:24.685482 1167760 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:41:24.685546 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:41:24.701404 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:41:24.719814 1167760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:41:24.832703 1167760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:41:24.965911 1167760 docker.go:234] disabling docker service ...
	I1002 21:41:24.966027 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:41:24.989018 1167760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:41:25.006405 1167760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:41:25.126216 1167760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:41:25.267392 1167760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:41:25.280528 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:41:25.295989 1167760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:41:25.296078 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.304855 1167760 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:41:25.304928 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.314194 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.322929 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.332130 1167760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:41:25.341430 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.350060 1167760 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:41:25.363502 1167760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
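The sed invocations above pin the pause image and the cgroup manager in CRI-O's drop-in config. A minimal Go sketch of the two central rewrites, using the same file path and replacement values as the log (must run as root to write the file; not minikube's actual implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}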
	I1002 21:41:25.372334 1167760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:41:25.380192 1167760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:41:25.387523 1167760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:41:25.503123 1167760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:41:25.624407 1167760 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:41:25.624538 1167760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:41:25.628403 1167760 start.go:564] Will wait 60s for crictl version
	I1002 21:41:25.628523 1167760 ssh_runner.go:195] Run: which crictl
	I1002 21:41:25.632063 1167760 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:41:25.656087 1167760 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:41:25.656231 1167760 ssh_runner.go:195] Run: crio --version
	I1002 21:41:25.685560 1167760 ssh_runner.go:195] Run: crio --version
	I1002 21:41:25.718977 1167760 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:41:25.721842 1167760 cli_runner.go:164] Run: docker network inspect force-systemd-env-916563 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:41:25.736455 1167760 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:41:25.740329 1167760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:41:25.750635 1167760 kubeadm.go:883] updating cluster {Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:41:25.750739 1167760 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:41:25.750813 1167760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:41:25.781699 1167760 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:41:25.781724 1167760 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:41:25.781778 1167760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:41:25.811194 1167760 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:41:25.811220 1167760 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:41:25.811229 1167760 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 21:41:25.811316 1167760 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-916563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:41:25.811399 1167760 ssh_runner.go:195] Run: crio config
	I1002 21:41:25.869918 1167760 cni.go:84] Creating CNI manager for ""
	I1002 21:41:25.869986 1167760 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:41:25.870011 1167760 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:41:25.870055 1167760 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-916563 NodeName:force-systemd-env-916563 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:41:25.870187 1167760 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-916563"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:41:25.870261 1167760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:41:25.878016 1167760 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:41:25.878115 1167760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:41:25.885808 1167760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 21:41:25.899640 1167760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:41:25.912986 1167760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1002 21:41:25.926270 1167760 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:41:25.929814 1167760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
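The bash one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: strip any existing entry for the name, then append the fresh one. A minimal Go sketch of the same logic (writing to /tmp/hosts.new rather than /etc/hosts so the example is safe to run):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line that does not already end in "<tab>host",
	// mirroring: grep -v $'\tcontrol-plane.minikube.internal$'
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.76.2\t"+host)
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}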
	I1002 21:41:25.939570 1167760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:41:26.048724 1167760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:41:26.067276 1167760 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563 for IP: 192.168.76.2
	I1002 21:41:26.067350 1167760 certs.go:195] generating shared ca certs ...
	I1002 21:41:26.067394 1167760 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.067613 1167760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:41:26.067685 1167760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:41:26.067717 1167760 certs.go:257] generating profile certs ...
	I1002 21:41:26.067802 1167760 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.key
	I1002 21:41:26.067864 1167760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.crt with IP's: []
	I1002 21:41:26.472814 1167760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.crt ...
	I1002 21:41:26.472848 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.crt: {Name:mk28822afcf9bad2ac6c923a9b9bd4fd0c35fa0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.473074 1167760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.key ...
	I1002 21:41:26.473093 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/client.key: {Name:mk40d533e68d8c195fb943dc97a94c16464457d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.473193 1167760 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da
	I1002 21:41:26.473211 1167760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 21:41:26.743202 1167760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da ...
	I1002 21:41:26.743236 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da: {Name:mkee7a3f859bac4775b0e36f24ab5f2e4de6964c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.743421 1167760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da ...
	I1002 21:41:26.743431 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da: {Name:mk0b688a96928dec869a0fdf46894c4c4bdda420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:26.743503 1167760 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt.d945a3da -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt
	I1002 21:41:26.743575 1167760 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key.d945a3da -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key
	I1002 21:41:26.743639 1167760 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key
	I1002 21:41:26.743657 1167760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt with IP's: []
	I1002 21:41:27.611411 1167760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt ...
	I1002 21:41:27.611444 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt: {Name:mk8c0096430bb7219abb411c1130902c8db84c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:27.612245 1167760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key ...
	I1002 21:41:27.612288 1167760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key: {Name:mk42a118ebcffd268fa7380b82f2960394325c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:41:27.612415 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:41:27.612592 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:41:27.612636 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:41:27.612755 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:41:27.612792 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:41:27.612822 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:41:27.612853 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:41:27.613006 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:41:27.613102 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:41:27.613170 1167760 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:41:27.613197 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:41:27.613249 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:41:27.613298 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:41:27.613500 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:41:27.613578 1167760 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:41:27.613636 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem -> /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.613670 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.613709 1167760 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:27.614351 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:41:27.639026 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:41:27.658347 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:41:27.676042 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:41:27.693937 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 21:41:27.712753 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:41:27.731105 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:41:27.750164 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/force-systemd-env-916563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:41:27.771461 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:41:27.789038 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:41:27.806804 1167760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:41:27.824615 1167760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:41:27.837751 1167760 ssh_runner.go:195] Run: openssl version
	I1002 21:41:27.844946 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:41:27.855279 1167760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.859593 1167760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.859685 1167760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:41:27.900868 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:41:27.909619 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:41:27.918252 1167760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.922660 1167760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.922766 1167760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:41:27.963922 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:41:27.972186 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:41:27.980557 1167760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:27.984370 1167760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:27.984454 1167760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:41:28.025686 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
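Each ln -fs above creates the <subject-hash>.0 symlink that OpenSSL-based clients use to look up a CA under /etc/ssl/certs; the hash (e.g. b5213941 for minikubeCA.pem, as in the command above) comes from the openssl x509 -hash call. A minimal Go sketch of that step, shelling out to openssl for the hash (run as root to create the link):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
}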
	I1002 21:41:28.034521 1167760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:41:28.038421 1167760 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:41:28.038479 1167760 kubeadm.go:400] StartCluster: {Name:force-systemd-env-916563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-916563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:41:28.038568 1167760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:41:28.038629 1167760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:41:28.065114 1167760 cri.go:89] found id: ""
	I1002 21:41:28.065213 1167760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:41:28.073495 1167760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:41:28.081908 1167760 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:41:28.082014 1167760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:41:28.090366 1167760 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:41:28.090395 1167760 kubeadm.go:157] found existing configuration files:
	
	I1002 21:41:28.090449 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:41:28.099740 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:41:28.099821 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:41:28.108202 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:41:28.116025 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:41:28.116118 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:41:28.123603 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:41:28.131368 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:41:28.131452 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:41:28.138780 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:41:28.146425 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:41:28.146501 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:41:28.153931 1167760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:41:28.193684 1167760 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:41:28.194025 1167760 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:41:28.244247 1167760 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:41:28.244344 1167760 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:41:28.244403 1167760 kubeadm.go:318] OS: Linux
	I1002 21:41:28.244471 1167760 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:41:28.244555 1167760 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:41:28.244620 1167760 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:41:28.244687 1167760 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:41:28.244762 1167760 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:41:28.244830 1167760 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:41:28.244892 1167760 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:41:28.244957 1167760 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:41:28.245022 1167760 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:41:28.336182 1167760 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:41:28.336334 1167760 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:41:28.336462 1167760 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:41:28.345250 1167760 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:41:28.352113 1167760 out.go:252]   - Generating certificates and keys ...
	I1002 21:41:28.352246 1167760 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:41:28.352335 1167760 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:41:28.798455 1167760 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:41:28.924958 1167760 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:41:29.244119 1167760 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:41:29.704228 1167760 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:41:30.136741 1167760 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:41:30.136919 1167760 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:41:30.449578 1167760 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:41:30.449743 1167760 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:41:31.433813 1167760 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:41:32.045986 1167760 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:41:32.285256 1167760 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:41:32.285552 1167760 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:41:32.664229 1167760 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:41:33.338585 1167760 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:41:34.011741 1167760 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:41:34.439290 1167760 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:41:34.950910 1167760 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:41:34.951737 1167760 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:41:34.954522 1167760 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:41:34.958198 1167760 out.go:252]   - Booting up control plane ...
	I1002 21:41:34.958301 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:41:34.958383 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:41:34.958454 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:41:34.973786 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:41:34.974181 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:41:34.982415 1167760 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:41:34.982719 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:41:34.982946 1167760 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:41:35.126635 1167760 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:41:35.126768 1167760 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:41:37.125213 1167760 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00099988s
	I1002 21:41:37.128881 1167760 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:41:37.128980 1167760 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:41:37.129402 1167760 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:41:37.129492 1167760 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:45:37.129380 1167760 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000136359s
	I1002 21:45:37.129502 1167760 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000107822s
	I1002 21:45:37.130411 1167760 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000411182s
	I1002 21:45:37.130434 1167760 kubeadm.go:318] 
	I1002 21:45:37.130530 1167760 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:45:37.130619 1167760 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:45:37.130715 1167760 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:45:37.130814 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:45:37.130892 1167760 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:45:37.130974 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:45:37.130979 1167760 kubeadm.go:318] 
	I1002 21:45:37.134501 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:45:37.134792 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:45:37.134916 1167760 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:45:37.135541 1167760 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:45:37.135617 1167760 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
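The three control-plane checks above each poll a health endpoint for up to 4m0s before giving up with connection refused. A minimal Go sketch of such a poll against the kube-apiserver /livez endpoint (TLS verification skipped because the cluster serves certs from its own CA; this is not kubeadm's actual checker):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   10 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	url := "https://192.168.76.2:8443/livez"
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kube-apiserver is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kube-apiserver is not healthy after 4m0s")
}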
	W1002 21:45:37.135750 1167760 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-916563 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.00099988s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136359s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000107822s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000411182s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	[control-plane-check] kube-scheduler is not healthy after 4m0.000411182s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 21:45:37.135836 1167760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:45:37.699135 1167760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:45:37.712262 1167760 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:45:37.712349 1167760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:45:37.720500 1167760 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:45:37.720519 1167760 kubeadm.go:157] found existing configuration files:
	
	I1002 21:45:37.720593 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:45:37.728334 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:45:37.728423 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:45:37.735753 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:45:37.743942 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:45:37.744022 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:45:37.751610 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:45:37.759054 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:45:37.759116 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:45:37.766332 1167760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:45:37.774856 1167760 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:45:37.774947 1167760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:45:37.782875 1167760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:45:37.824782 1167760 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:45:37.824843 1167760 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:45:37.850631 1167760 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:45:37.850712 1167760 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:45:37.850752 1167760 kubeadm.go:318] OS: Linux
	I1002 21:45:37.850804 1167760 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:45:37.850858 1167760 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:45:37.850911 1167760 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:45:37.850966 1167760 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:45:37.851024 1167760 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:45:37.851084 1167760 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:45:37.851136 1167760 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:45:37.851190 1167760 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:45:37.851243 1167760 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:45:37.920925 1167760 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:45:37.921059 1167760 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:45:37.921184 1167760 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:45:37.928860 1167760 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:45:37.936006 1167760 out.go:252]   - Generating certificates and keys ...
	I1002 21:45:37.936107 1167760 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:45:37.936178 1167760 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:45:37.936307 1167760 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:45:37.936434 1167760 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:45:37.936514 1167760 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:45:37.936574 1167760 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:45:37.936656 1167760 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:45:37.936760 1167760 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:45:37.936893 1167760 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:45:37.936989 1167760 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:45:37.937045 1167760 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:45:37.937118 1167760 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:45:39.356212 1167760 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:45:39.970369 1167760 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:45:40.707350 1167760 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:45:40.843823 1167760 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:45:41.587742 1167760 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:45:41.588613 1167760 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:45:41.591474 1167760 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:45:41.594686 1167760 out.go:252]   - Booting up control plane ...
	I1002 21:45:41.594798 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:45:41.594895 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:45:41.596974 1167760 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:45:41.612529 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:45:41.612649 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:45:41.621829 1167760 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:45:41.622258 1167760 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:45:41.622559 1167760 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:45:41.764129 1167760 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:45:41.764269 1167760 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:45:43.265873 1167760 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501822128s
	I1002 21:45:43.269408 1167760 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:45:43.269512 1167760 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:45:43.269640 1167760 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:45:43.269729 1167760 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:49:43.270226 1167760 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000666018s
	I1002 21:49:43.270332 1167760 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000730378s
	I1002 21:49:43.273860 1167760 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000020595s
	I1002 21:49:43.273887 1167760 kubeadm.go:318] 
	I1002 21:49:43.274064 1167760 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:49:43.274584 1167760 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:49:43.274755 1167760 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:49:43.274930 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:49:43.275065 1167760 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:49:43.275220 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:49:43.275226 1167760 kubeadm.go:318] 
	I1002 21:49:43.277041 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:49:43.277303 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:49:43.277457 1167760 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:49:43.278086 1167760 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:49:43.278168 1167760 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:49:43.278227 1167760 kubeadm.go:402] duration metric: took 8m15.239753492s to StartCluster
	I1002 21:49:43.278265 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:49:43.278335 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:49:43.303452 1167760 cri.go:89] found id: ""
	I1002 21:49:43.303487 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.303495 1167760 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:49:43.303502 1167760 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:49:43.303561 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:49:43.332557 1167760 cri.go:89] found id: ""
	I1002 21:49:43.332636 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.332660 1167760 logs.go:284] No container was found matching "etcd"
	I1002 21:49:43.332679 1167760 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:49:43.332769 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:49:43.359893 1167760 cri.go:89] found id: ""
	I1002 21:49:43.359923 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.359932 1167760 logs.go:284] No container was found matching "coredns"
	I1002 21:49:43.359944 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:49:43.360004 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:49:43.390819 1167760 cri.go:89] found id: ""
	I1002 21:49:43.390844 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.390853 1167760 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:49:43.390860 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:49:43.390918 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:49:43.416350 1167760 cri.go:89] found id: ""
	I1002 21:49:43.416376 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.416385 1167760 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:49:43.416400 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:49:43.416465 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:49:43.451083 1167760 cri.go:89] found id: ""
	I1002 21:49:43.451114 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.451123 1167760 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:49:43.451159 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:49:43.451235 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:49:43.481350 1167760 cri.go:89] found id: ""
	I1002 21:49:43.481381 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.481390 1167760 logs.go:284] No container was found matching "kindnet"
	I1002 21:49:43.481399 1167760 logs.go:123] Gathering logs for kubelet ...
	I1002 21:49:43.481410 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:49:43.567759 1167760 logs.go:123] Gathering logs for dmesg ...
	I1002 21:49:43.567796 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:49:43.586454 1167760 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:49:43.586480 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:49:43.657452 1167760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:49:43.649649    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.650186    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.651725    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.652150    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.653340    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:49:43.649649    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.650186    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.651725    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.652150    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.653340    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:49:43.657478 1167760 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:49:43.657490 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:49:43.734110 1167760 logs.go:123] Gathering logs for container status ...
	I1002 21:49:43.734144 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 21:49:43.763948 1167760 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501822128s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000666018s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000730378s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000020595s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:49:43.764011 1167760 out.go:285] * 
	W1002 21:49:43.764097 1167760 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501822128s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000666018s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000730378s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000020595s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:49:43.764117 1167760 out.go:285] * 
	W1002 21:49:43.766672 1167760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:49:43.772424 1167760 out.go:203] 
	W1002 21:49:43.775278 1167760 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501822128s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000666018s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000730378s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000020595s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:49:43.775317 1167760 out.go:285] * 
	I1002 21:49:43.778474 1167760 out.go:203] 

                                                
                                                
** /stderr **
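The kubeadm output above repeatedly suggests the same triage path. A minimal sketch of that triage, assuming shell access to the force-systemd-env-916563 node (the socket path is taken verbatim from the log; CONTAINERID is a placeholder for an ID found by the first command):

	# List all Kubernetes containers known to CRI-O, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of whichever control-plane container is failing
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# The harness gathered kubelet logs the same way before giving up
	sudo journalctl -u kubelet -n 400

With the docker driver these commands run inside the node container, e.g. after 'minikube ssh -p force-systemd-env-916563'.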
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-02 21:49:43.828613161 +0000 UTC m=+5499.928549658
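To reproduce outside the test harness, the failing invocation can be rerun as the suite runs it (binary path as built by this CI job; the profile name is arbitrary):

	out/minikube-linux-arm64 start -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	# Collect the log bundle the advice box above asks for
	out/minikube-linux-arm64 logs -p force-systemd-env-916563 --file=logs.txt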
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-916563
helpers_test.go:243: (dbg) docker inspect force-systemd-env-916563:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2",
	        "Created": "2025-10-02T21:41:18.910273486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1168173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:41:18.992737543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2/hosts",
	        "LogPath": "/var/lib/docker/containers/0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2/0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2-json.log",
	        "Name": "/force-systemd-env-916563",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-env-916563:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-916563",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0274c53e62dd805a217256f29f053fcfc049a6ede3516368d2a13260576252d2",
	                "LowerDir": "/var/lib/docker/overlay2/82c891ec0b8b1e0cc638ce38eb74d871a848f19be0a9ef19e5bcf677ed0ac9d5-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82c891ec0b8b1e0cc638ce38eb74d871a848f19be0a9ef19e5bcf677ed0ac9d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82c891ec0b8b1e0cc638ce38eb74d871a848f19be0a9ef19e5bcf677ed0ac9d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82c891ec0b8b1e0cc638ce38eb74d871a848f19be0a9ef19e5bcf677ed0ac9d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-916563",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-916563/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-916563",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-916563",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-916563",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "48b25f34f8a4df8dd915421182a6e012f684e6df7f7e3d52c45ec335bbf69b84",
	            "SandboxKey": "/var/run/docker/netns/48b25f34f8a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34166"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34167"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34170"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34168"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34169"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-916563": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:71:5c:db:83:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3979be5540679e138acbb6ea780558a08bbc0772ae1f8d304870b3aab20eae37",
	                    "EndpointID": "d8348bb3c113baf67bd3393599111a31e55dffb2248a0c0f87b1ee71465d3ffd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-916563",
	                        "0274c53e62dd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
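Individual fields of the inspect dump above can be pulled out with Go templates instead of reading the full JSON; a sketch against the container from this report (the same template syntax the harness itself uses later in these logs):

	# Container state and init PID:
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' force-systemd-env-916563

	# Host port mapped to the API server port 8443 (34169 in the dump above):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-env-916563

	# The container's IP on the per-profile network (192.168.76.2 above):
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-env-916563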
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-916563 -n force-systemd-env-916563
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-916563 -n force-systemd-env-916563: exit status 6 (307.907383ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:49:44.148710 1174901 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-916563" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
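The exit status 6 comes from the profile being absent from the kubeconfig: the failed start never registered its endpoint. For a healthy profile, the fix suggested in the status output above would be, roughly:

	# Rewrite the kubeconfig entry for the profile, per the warning above
	# (a no-op for this run, since the cluster never came up):
	minikube update-context -p force-systemd-env-916563

	# Confirm which contexts the kubeconfig actually holds:
	kubectl config get-contexts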
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-916563 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-644857 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status docker --all --full --no-pager                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat docker --no-pager                                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/docker/daemon.json                                                          │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo docker system info                                                                   │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cri-dockerd --version                                                                │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat containerd --no-pager                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/containerd/config.toml                                                      │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo containerd config dump                                                               │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status crio --all --full --no-pager                                        │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat crio --no-pager                                                        │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo crio config                                                                          │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ delete  │ -p cilium-644857                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │ 02 Oct 25 21:41 UTC │
	│ start   │ -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ force-systemd-flag-987043 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-flag-987043                                                                               │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:48:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:48:05.577167 1172202 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:48:05.577283 1172202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:48:05.577287 1172202 out.go:374] Setting ErrFile to fd 2...
	I1002 21:48:05.577290 1172202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:48:05.577527 1172202 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:48:05.577905 1172202 out.go:368] Setting JSON to false
	I1002 21:48:05.578876 1172202 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23423,"bootTime":1759418263,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:48:05.578937 1172202 start.go:140] virtualization:  
	I1002 21:48:05.582762 1172202 out.go:179] * [cert-expiration-955864] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:48:05.587175 1172202 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:48:05.587285 1172202 notify.go:221] Checking for updates...
	I1002 21:48:05.593739 1172202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:48:05.596979 1172202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:48:05.600024 1172202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:48:05.603125 1172202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:48:05.606207 1172202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:48:05.609747 1172202 config.go:182] Loaded profile config "force-systemd-env-916563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:48:05.609857 1172202 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:48:05.637021 1172202 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:48:05.637124 1172202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:48:05.697209 1172202 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:48:05.687919038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:48:05.697308 1172202 docker.go:319] overlay module found
	I1002 21:48:05.700513 1172202 out.go:179] * Using the docker driver based on user configuration
	I1002 21:48:05.703481 1172202 start.go:306] selected driver: docker
	I1002 21:48:05.703491 1172202 start.go:936] validating driver "docker" against <nil>
	I1002 21:48:05.703500 1172202 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:48:05.704238 1172202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:48:05.753750 1172202 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:48:05.744308433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:48:05.753886 1172202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:48:05.754125 1172202 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:48:05.756993 1172202 out.go:179] * Using Docker driver with root privileges
	I1002 21:48:05.759791 1172202 cni.go:84] Creating CNI manager for ""
	I1002 21:48:05.759855 1172202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:48:05.759864 1172202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:48:05.759934 1172202 start.go:350] cluster config:
	{Name:cert-expiration-955864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:48:05.763069 1172202 out.go:179] * Starting "cert-expiration-955864" primary control-plane node in "cert-expiration-955864" cluster
	I1002 21:48:05.766121 1172202 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:48:05.769019 1172202 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:48:05.771927 1172202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:48:05.771979 1172202 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:48:05.772006 1172202 cache.go:59] Caching tarball of preloaded images
	I1002 21:48:05.772005 1172202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:48:05.772096 1172202 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:48:05.772105 1172202 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:48:05.772210 1172202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/config.json ...
	I1002 21:48:05.772226 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/config.json: {Name:mkfb781c6f2995fd65dd01edff5db0b719c0de90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:05.791371 1172202 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:48:05.791382 1172202 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:48:05.791400 1172202 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:48:05.791421 1172202 start.go:361] acquireMachinesLock for cert-expiration-955864: {Name:mk17ba83053c428e3e5a5b6dc8fe84c1b101dcdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:48:05.791544 1172202 start.go:365] duration metric: took 109.215µs to acquireMachinesLock for "cert-expiration-955864"
	I1002 21:48:05.791568 1172202 start.go:94] Provisioning new machine with config: &{Name:cert-expiration-955864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:48:05.791631 1172202 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:48:05.795158 1172202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:48:05.795370 1172202 start.go:160] libmachine.API.Create for "cert-expiration-955864" (driver="docker")
	I1002 21:48:05.795406 1172202 client.go:168] LocalClient.Create starting
	I1002 21:48:05.795470 1172202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:48:05.795506 1172202 main.go:141] libmachine: Decoding PEM data...
	I1002 21:48:05.795517 1172202 main.go:141] libmachine: Parsing certificate...
	I1002 21:48:05.795573 1172202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:48:05.795589 1172202 main.go:141] libmachine: Decoding PEM data...
	I1002 21:48:05.795597 1172202 main.go:141] libmachine: Parsing certificate...
	I1002 21:48:05.795948 1172202 cli_runner.go:164] Run: docker network inspect cert-expiration-955864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:48:05.812477 1172202 cli_runner.go:211] docker network inspect cert-expiration-955864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:48:05.812566 1172202 network_create.go:284] running [docker network inspect cert-expiration-955864] to gather additional debugging logs...
	I1002 21:48:05.812581 1172202 cli_runner.go:164] Run: docker network inspect cert-expiration-955864
	W1002 21:48:05.828424 1172202 cli_runner.go:211] docker network inspect cert-expiration-955864 returned with exit code 1
	I1002 21:48:05.828448 1172202 network_create.go:287] error running [docker network inspect cert-expiration-955864]: docker network inspect cert-expiration-955864: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-955864 not found
	I1002 21:48:05.828461 1172202 network_create.go:289] output of [docker network inspect cert-expiration-955864]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-955864 not found
	
	** /stderr **
	I1002 21:48:05.828572 1172202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:48:05.845672 1172202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:48:05.846099 1172202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:48:05.846360 1172202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:48:05.846622 1172202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3979be554067 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:08:b0:6e:9e:6a} reservation:<nil>}
	I1002 21:48:05.847039 1172202 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a32120}
	I1002 21:48:05.847055 1172202 network_create.go:124] attempt to create docker network cert-expiration-955864 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 21:48:05.847112 1172202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-955864 cert-expiration-955864
	I1002 21:48:05.912688 1172202 network_create.go:108] docker network cert-expiration-955864 192.168.85.0/24 created
	I1002 21:48:05.912711 1172202 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-955864" container
	I1002 21:48:05.912790 1172202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:48:05.932037 1172202 cli_runner.go:164] Run: docker volume create cert-expiration-955864 --label name.minikube.sigs.k8s.io=cert-expiration-955864 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:48:05.951360 1172202 oci.go:103] Successfully created a docker volume cert-expiration-955864
	I1002 21:48:05.951433 1172202 cli_runner.go:164] Run: docker run --rm --name cert-expiration-955864-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-955864 --entrypoint /usr/bin/test -v cert-expiration-955864:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:48:06.504071 1172202 oci.go:107] Successfully prepared a docker volume cert-expiration-955864
	I1002 21:48:06.504113 1172202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:48:06.504132 1172202 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:48:06.504198 1172202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-955864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:48:10.898642 1172202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-955864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.394402588s)
	I1002 21:48:10.898663 1172202 kic.go:203] duration metric: took 4.394527696s to extract preloaded images to volume ...
	W1002 21:48:10.898805 1172202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:48:10.898904 1172202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:48:10.950614 1172202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-955864 --name cert-expiration-955864 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-955864 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-955864 --network cert-expiration-955864 --ip 192.168.85.2 --volume cert-expiration-955864:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:48:11.252320 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Running}}
	I1002 21:48:11.272885 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:48:11.297186 1172202 cli_runner.go:164] Run: docker exec cert-expiration-955864 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:48:11.350670 1172202 oci.go:144] the created container "cert-expiration-955864" has a running status.
	I1002 21:48:11.350700 1172202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa...
	I1002 21:48:11.652137 1172202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:48:11.683862 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:48:11.708432 1172202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:48:11.708444 1172202 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-955864 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:48:11.761773 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:48:11.780659 1172202 machine.go:93] provisionDockerMachine start ...
	I1002 21:48:11.780760 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:11.798785 1172202 main.go:141] libmachine: Using SSH client type: native
	I1002 21:48:11.799106 1172202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34171 <nil> <nil>}
	I1002 21:48:11.799114 1172202 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:48:11.799767 1172202 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41686->127.0.0.1:34171: read: connection reset by peer
	I1002 21:48:14.929711 1172202 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-955864
	
	I1002 21:48:14.929725 1172202 ubuntu.go:182] provisioning hostname "cert-expiration-955864"
	I1002 21:48:14.929791 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:14.947159 1172202 main.go:141] libmachine: Using SSH client type: native
	I1002 21:48:14.947455 1172202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34171 <nil> <nil>}
	I1002 21:48:14.947464 1172202 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-955864 && echo "cert-expiration-955864" | sudo tee /etc/hostname
	I1002 21:48:15.104230 1172202 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-955864
	
	I1002 21:48:15.104303 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:15.122532 1172202 main.go:141] libmachine: Using SSH client type: native
	I1002 21:48:15.122843 1172202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34171 <nil> <nil>}
	I1002 21:48:15.122858 1172202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-955864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-955864/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-955864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:48:15.258411 1172202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:48:15.258427 1172202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:48:15.258448 1172202 ubuntu.go:190] setting up certificates
	I1002 21:48:15.258456 1172202 provision.go:84] configureAuth start
	I1002 21:48:15.258515 1172202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-955864
	I1002 21:48:15.276389 1172202 provision.go:143] copyHostCerts
	I1002 21:48:15.276457 1172202 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:48:15.276465 1172202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:48:15.276541 1172202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:48:15.276636 1172202 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:48:15.276640 1172202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:48:15.276664 1172202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:48:15.276720 1172202 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:48:15.276724 1172202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:48:15.276746 1172202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:48:15.276801 1172202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-955864 san=[127.0.0.1 192.168.85.2 cert-expiration-955864 localhost minikube]
	I1002 21:48:16.347506 1172202 provision.go:177] copyRemoteCerts
	I1002 21:48:16.347566 1172202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:48:16.347607 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:16.365052 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:16.461492 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:48:16.477552 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 21:48:16.494899 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:48:16.512688 1172202 provision.go:87] duration metric: took 1.254210548s to configureAuth
	I1002 21:48:16.512706 1172202 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:48:16.512889 1172202 config.go:182] Loaded profile config "cert-expiration-955864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:48:16.512993 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:16.530106 1172202 main.go:141] libmachine: Using SSH client type: native
	I1002 21:48:16.530406 1172202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34171 <nil> <nil>}
	I1002 21:48:16.530419 1172202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:48:16.775524 1172202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:48:16.775536 1172202 machine.go:96] duration metric: took 4.994865995s to provisionDockerMachine
	I1002 21:48:16.775549 1172202 client.go:171] duration metric: took 10.98013266s to LocalClient.Create
	I1002 21:48:16.775561 1172202 start.go:168] duration metric: took 10.980191546s to libmachine.API.Create "cert-expiration-955864"
	I1002 21:48:16.775566 1172202 start.go:294] postStartSetup for "cert-expiration-955864" (driver="docker")
	I1002 21:48:16.775575 1172202 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:48:16.775634 1172202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:48:16.775671 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:16.793156 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:16.894042 1172202 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:48:16.897397 1172202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:48:16.897415 1172202 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:48:16.897424 1172202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:48:16.897479 1172202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:48:16.897560 1172202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:48:16.897661 1172202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:48:16.905147 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:48:16.922928 1172202 start.go:297] duration metric: took 147.347884ms for postStartSetup
	I1002 21:48:16.923283 1172202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-955864
	I1002 21:48:16.940138 1172202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/config.json ...
	I1002 21:48:16.940425 1172202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:48:16.940466 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:16.957273 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:17.051235 1172202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:48:17.055891 1172202 start.go:129] duration metric: took 11.264246521s to createHost
	I1002 21:48:17.055906 1172202 start.go:84] releasing machines lock for "cert-expiration-955864", held for 11.264354999s
	I1002 21:48:17.055983 1172202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-955864
	I1002 21:48:17.072649 1172202 ssh_runner.go:195] Run: cat /version.json
	I1002 21:48:17.072695 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:17.072712 1172202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:48:17.072787 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:17.097851 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:17.114929 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:17.191039 1172202 ssh_runner.go:195] Run: systemctl --version
	I1002 21:48:17.285239 1172202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:48:17.321286 1172202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:48:17.325698 1172202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:48:17.325763 1172202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:48:17.356176 1172202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:48:17.356189 1172202 start.go:496] detecting cgroup driver to use...
	I1002 21:48:17.356225 1172202 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:48:17.356278 1172202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:48:17.374152 1172202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:48:17.387226 1172202 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:48:17.387281 1172202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:48:17.404634 1172202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:48:17.423160 1172202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:48:17.540744 1172202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:48:17.666286 1172202 docker.go:234] disabling docker service ...
	I1002 21:48:17.666344 1172202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:48:17.686799 1172202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:48:17.699787 1172202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:48:17.823155 1172202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:48:17.931620 1172202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:48:17.944435 1172202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:48:17.959651 1172202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:48:17.959708 1172202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:48:17.968393 1172202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:48:17.968458 1172202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:48:17.977369 1172202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:48:17.986483 1172202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:48:17.994921 1172202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:48:18.005587 1172202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:48:18.016272 1172202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:48:18.031248 1172202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
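Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this (a sketch assembled from the substitutions; the [crio.image] and [crio.runtime] section names are assumed from CRI-O's stock config layout, and any other keys in the file are untouched):

	# /etc/crio/crio.conf.d/02-crio.conf (sketch)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]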
	I1002 21:48:18.040939 1172202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:48:18.049756 1172202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:48:18.057761 1172202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:48:18.182555 1172202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:48:18.304665 1172202 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:48:18.304738 1172202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:48:18.308574 1172202 start.go:564] Will wait 60s for crictl version
	I1002 21:48:18.308631 1172202 ssh_runner.go:195] Run: which crictl
	I1002 21:48:18.311989 1172202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:48:18.336602 1172202 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:48:18.336722 1172202 ssh_runner.go:195] Run: crio --version
	I1002 21:48:18.364554 1172202 ssh_runner.go:195] Run: crio --version
	I1002 21:48:18.394632 1172202 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:48:18.397338 1172202 cli_runner.go:164] Run: docker network inspect cert-expiration-955864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:48:18.412653 1172202 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:48:18.416525 1172202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:48:18.425693 1172202 kubeadm.go:883] updating cluster {Name:cert-expiration-955864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:48:18.425785 1172202 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:48:18.425837 1172202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:48:18.457718 1172202 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:48:18.457730 1172202 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:48:18.457787 1172202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:48:18.486811 1172202 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:48:18.486824 1172202 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:48:18.486830 1172202 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:48:18.486931 1172202 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-955864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:48:18.487010 1172202 ssh_runner.go:195] Run: crio config
	I1002 21:48:18.564570 1172202 cni.go:84] Creating CNI manager for ""
	I1002 21:48:18.564581 1172202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:48:18.564592 1172202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:48:18.564614 1172202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-955864 NodeName:cert-expiration-955864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:48:18.564754 1172202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-955864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
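	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml below and handed to kubeadm init in one shot. If you were checking such a file by hand, kubeadm ships its own validator (a sketch, assuming kubeadm v1.26 or newer where the subcommand exists):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml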
	
	I1002 21:48:18.564839 1172202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:48:18.572766 1172202 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:48:18.572827 1172202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:48:18.580655 1172202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 21:48:18.594225 1172202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:48:18.606894 1172202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1002 21:48:18.620187 1172202 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:48:18.623759 1172202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:48:18.633024 1172202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:48:18.746253 1172202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:48:18.767785 1172202 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864 for IP: 192.168.85.2
	I1002 21:48:18.767797 1172202 certs.go:195] generating shared ca certs ...
	I1002 21:48:18.767816 1172202 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:18.767948 1172202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:48:18.767984 1172202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:48:18.767992 1172202 certs.go:257] generating profile certs ...
	I1002 21:48:18.768053 1172202 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/client.key
	I1002 21:48:18.768073 1172202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/client.crt with IP's: []
	I1002 21:48:18.875024 1172202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/client.crt ...
	I1002 21:48:18.875040 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/client.crt: {Name:mke89d9c514fb5988aae5adebee2144cfe476995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:18.875244 1172202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/client.key ...
	I1002 21:48:18.875252 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/client.key: {Name:mk26d764dbe29ab7432917e63288997d1536c357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:18.875348 1172202 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.key.5fd343f8
	I1002 21:48:18.875361 1172202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.crt.5fd343f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 21:48:19.506689 1172202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.crt.5fd343f8 ...
	I1002 21:48:19.506703 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.crt.5fd343f8: {Name:mk384300081818e52ec7cf40095de027533b238c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:19.506902 1172202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.key.5fd343f8 ...
	I1002 21:48:19.506910 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.key.5fd343f8: {Name:mke02b3178134e835df8b1aa18b22a1d6e9f9f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:19.506978 1172202 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.crt.5fd343f8 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.crt
	I1002 21:48:19.507049 1172202 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.key.5fd343f8 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.key
	I1002 21:48:19.507101 1172202 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.key
	I1002 21:48:19.507113 1172202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.crt with IP's: []
	I1002 21:48:19.803995 1172202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.crt ...
	I1002 21:48:19.804010 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.crt: {Name:mk5c879ab31fbee334d1d80a556a5f6de88b8e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:19.804212 1172202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.key ...
	I1002 21:48:19.804220 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.key: {Name:mk6365b7f7911248363aa226242d1f7fef68b284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:19.804408 1172202 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:48:19.804443 1172202 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:48:19.804450 1172202 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:48:19.804474 1172202 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:48:19.804496 1172202 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:48:19.804515 1172202 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:48:19.804555 1172202 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:48:19.805184 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:48:19.824309 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:48:19.842697 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:48:19.860927 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:48:19.878986 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 21:48:19.896328 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:48:19.913630 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:48:19.931217 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:48:19.948189 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:48:19.965964 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:48:19.984929 1172202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:48:20.006865 1172202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:48:20.023890 1172202 ssh_runner.go:195] Run: openssl version
	I1002 21:48:20.031235 1172202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:48:20.041181 1172202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:48:20.045445 1172202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:48:20.045517 1172202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:48:20.088086 1172202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:48:20.097401 1172202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:48:20.107653 1172202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:48:20.112334 1172202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:48:20.112398 1172202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:48:20.154298 1172202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:48:20.163431 1172202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:48:20.172498 1172202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:48:20.176511 1172202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:48:20.176566 1172202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:48:20.218104 1172202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
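The hash-named links (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject-name hash plus a .0 suffix, which is what lets OpenSSL look a CA up by scanning /etc/ssl/certs. The same step done by hand:

	# prints b5213941 for minikubeCA.pem here, matching the symlink created above
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"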
	I1002 21:48:20.227010 1172202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:48:20.231941 1172202 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:48:20.232000 1172202 kubeadm.go:400] StartCluster: {Name:cert-expiration-955864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:48:20.232094 1172202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:48:20.232162 1172202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:48:20.264035 1172202 cri.go:89] found id: ""
	I1002 21:48:20.264111 1172202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:48:20.273045 1172202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:48:20.281151 1172202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:48:20.281219 1172202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:48:20.289450 1172202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:48:20.289458 1172202 kubeadm.go:157] found existing configuration files:
	
	I1002 21:48:20.289532 1172202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:48:20.297790 1172202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:48:20.297852 1172202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:48:20.305183 1172202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:48:20.312878 1172202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:48:20.312938 1172202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:48:20.320636 1172202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:48:20.328480 1172202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:48:20.328556 1172202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:48:20.336042 1172202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:48:20.343963 1172202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:48:20.344034 1172202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:48:20.351969 1172202 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:48:20.395985 1172202 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:48:20.396070 1172202 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:48:20.420064 1172202 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:48:20.420136 1172202 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:48:20.420171 1172202 kubeadm.go:318] OS: Linux
	I1002 21:48:20.420231 1172202 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:48:20.420287 1172202 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:48:20.420341 1172202 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:48:20.420390 1172202 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:48:20.420453 1172202 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:48:20.420536 1172202 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:48:20.420592 1172202 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:48:20.420654 1172202 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:48:20.420701 1172202 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:48:20.490771 1172202 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:48:20.490911 1172202 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:48:20.491023 1172202 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:48:20.498089 1172202 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:48:20.504547 1172202 out.go:252]   - Generating certificates and keys ...
	I1002 21:48:20.504664 1172202 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:48:20.504747 1172202 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:48:20.827746 1172202 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:48:21.268557 1172202 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:48:21.956075 1172202 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:48:22.758102 1172202 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:48:23.436069 1172202 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:48:23.436391 1172202 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-955864 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:48:23.576382 1172202 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:48:23.576692 1172202 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-955864 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:48:24.116974 1172202 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:48:24.475037 1172202 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:48:25.021565 1172202 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:48:25.021798 1172202 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:48:26.099361 1172202 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:48:26.688401 1172202 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:48:27.677035 1172202 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:48:27.841235 1172202 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:48:27.936075 1172202 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:48:27.936977 1172202 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:48:27.939760 1172202 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:48:27.943821 1172202 out.go:252]   - Booting up control plane ...
	I1002 21:48:27.943919 1172202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:48:27.943999 1172202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:48:27.944077 1172202 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:48:27.960299 1172202 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:48:27.960605 1172202 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:48:27.968425 1172202 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:48:27.968803 1172202 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:48:27.968987 1172202 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:48:28.112336 1172202 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:48:28.112454 1172202 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:48:30.613293 1172202 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501286148s
	I1002 21:48:30.616900 1172202 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:48:30.616993 1172202 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 21:48:30.617086 1172202 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:48:30.617167 1172202 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:48:34.258792 1172202 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.641271523s
	I1002 21:48:35.858011 1172202 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.241029969s
	I1002 21:48:37.618514 1172202 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001413334s
	I1002 21:48:37.637776 1172202 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:48:37.652674 1172202 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:48:37.667799 1172202 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:48:37.668041 1172202 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-955864 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:48:37.681857 1172202 kubeadm.go:318] [bootstrap-token] Using token: 9b70wz.o6luzpkf5n4w0lmv
	I1002 21:48:37.684973 1172202 out.go:252]   - Configuring RBAC rules ...
	I1002 21:48:37.685100 1172202 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:48:37.695392 1172202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:48:37.704165 1172202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:48:37.709928 1172202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:48:37.713965 1172202 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:48:37.721636 1172202 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:48:38.025880 1172202 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:48:38.449385 1172202 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:48:39.025389 1172202 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:48:39.026448 1172202 kubeadm.go:318] 
	I1002 21:48:39.026513 1172202 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:48:39.026517 1172202 kubeadm.go:318] 
	I1002 21:48:39.026592 1172202 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:48:39.026596 1172202 kubeadm.go:318] 
	I1002 21:48:39.026620 1172202 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:48:39.026684 1172202 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:48:39.026734 1172202 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:48:39.026737 1172202 kubeadm.go:318] 
	I1002 21:48:39.026789 1172202 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:48:39.026792 1172202 kubeadm.go:318] 
	I1002 21:48:39.026838 1172202 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:48:39.026842 1172202 kubeadm.go:318] 
	I1002 21:48:39.026898 1172202 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:48:39.026971 1172202 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:48:39.027038 1172202 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:48:39.027041 1172202 kubeadm.go:318] 
	I1002 21:48:39.027124 1172202 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:48:39.027205 1172202 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:48:39.027209 1172202 kubeadm.go:318] 
	I1002 21:48:39.027291 1172202 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 9b70wz.o6luzpkf5n4w0lmv \
	I1002 21:48:39.027392 1172202 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:48:39.027412 1172202 kubeadm.go:318] 	--control-plane 
	I1002 21:48:39.027416 1172202 kubeadm.go:318] 
	I1002 21:48:39.027498 1172202 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:48:39.027502 1172202 kubeadm.go:318] 
	I1002 21:48:39.027581 1172202 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 9b70wz.o6luzpkf5n4w0lmv \
	I1002 21:48:39.027681 1172202 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:48:39.032057 1172202 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:48:39.032298 1172202 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:48:39.032424 1172202 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
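The Service-Kubelet warning is advisory for a throwaway test node; on a host that should survive reboots you would clear it exactly as kubeadm suggests:

	sudo systemctl enable kubelet.service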
	I1002 21:48:39.032446 1172202 cni.go:84] Creating CNI manager for ""
	I1002 21:48:39.032452 1172202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:48:39.037369 1172202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:48:39.040134 1172202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:48:39.044038 1172202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:48:39.044048 1172202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:48:39.059113 1172202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:48:39.355353 1172202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:48:39.355483 1172202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:48:39.355561 1172202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-955864 minikube.k8s.io/updated_at=2025_10_02T21_48_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=cert-expiration-955864 minikube.k8s.io/primary=true
	I1002 21:48:39.373111 1172202 ops.go:34] apiserver oom_adj: -16
	I1002 21:48:39.475133 1172202 kubeadm.go:1113] duration metric: took 119.699299ms to wait for elevateKubeSystemPrivileges
	I1002 21:48:39.518298 1172202 kubeadm.go:402] duration metric: took 19.286303892s to StartCluster
	I1002 21:48:39.518323 1172202 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:39.518390 1172202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:48:39.519021 1172202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:48:39.519211 1172202 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:48:39.519291 1172202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:48:39.519503 1172202 config.go:182] Loaded profile config "cert-expiration-955864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:48:39.519530 1172202 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:48:39.519590 1172202 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-955864"
	I1002 21:48:39.519602 1172202 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-955864"
	I1002 21:48:39.519621 1172202 host.go:66] Checking if "cert-expiration-955864" exists ...
	I1002 21:48:39.520116 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:48:39.520315 1172202 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-955864"
	I1002 21:48:39.520327 1172202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-955864"
	I1002 21:48:39.520572 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:48:39.522352 1172202 out.go:179] * Verifying Kubernetes components...
	I1002 21:48:39.525413 1172202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:48:39.558159 1172202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:48:39.560478 1172202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:48:39.560488 1172202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:48:39.560546 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:39.562778 1172202 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-955864"
	I1002 21:48:39.562810 1172202 host.go:66] Checking if "cert-expiration-955864" exists ...
	I1002 21:48:39.563263 1172202 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:48:39.597567 1172202 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:48:39.597589 1172202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:48:39.597648 1172202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-955864
	I1002 21:48:39.603098 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:39.628398 1172202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34171 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/cert-expiration-955864/id_rsa Username:docker}
	I1002 21:48:39.843408 1172202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:48:39.843503 1172202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:48:39.847861 1172202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:48:39.853358 1172202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:48:40.302559 1172202 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
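The sed pipeline above rewrites the coredns ConfigMap in place, inserting a log directive before errors and a hosts block before the forward plugin, so the Corefile ends up roughly like this (a sketch; the elided stock directives are unchanged):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    ...
	    forward . /etc/resolv.conf
	    ...
	}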
	I1002 21:48:40.304201 1172202 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:48:40.304253 1172202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:48:40.473758 1172202 api_server.go:72] duration metric: took 954.521504ms to wait for apiserver process to appear ...
	I1002 21:48:40.473769 1172202 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:48:40.473786 1172202 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:48:40.477292 1172202 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 21:48:40.480113 1172202 addons.go:514] duration metric: took 960.562098ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 21:48:40.486722 1172202 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 21:48:40.487895 1172202 api_server.go:141] control plane version: v1.34.1
	I1002 21:48:40.487921 1172202 api_server.go:131] duration metric: took 14.145676ms to wait for apiserver health ...
	I1002 21:48:40.487928 1172202 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:48:40.492629 1172202 system_pods.go:59] 5 kube-system pods found
	I1002 21:48:40.492652 1172202 system_pods.go:61] "etcd-cert-expiration-955864" [83772a13-e430-47cc-9af2-ea6b4be4a568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:48:40.492662 1172202 system_pods.go:61] "kube-apiserver-cert-expiration-955864" [eb7333b5-2f09-400c-a499-d8d5fe7c1f15] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:48:40.492668 1172202 system_pods.go:61] "kube-controller-manager-cert-expiration-955864" [f875f04d-63d1-44f6-bba9-ba4b0b346674] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:48:40.492673 1172202 system_pods.go:61] "kube-scheduler-cert-expiration-955864" [9c06bef8-882a-44dc-9a1f-77c7bc2e78f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:48:40.492678 1172202 system_pods.go:61] "storage-provisioner" [3c0bef28-6475-4b24-9142-3a3580cad28d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:48:40.492688 1172202 system_pods.go:74] duration metric: took 4.753852ms to wait for pod list to return data ...
	I1002 21:48:40.492699 1172202 kubeadm.go:586] duration metric: took 973.468505ms to wait for: map[apiserver:true system_pods:true]
	I1002 21:48:40.492710 1172202 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:48:40.495237 1172202 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:48:40.495255 1172202 node_conditions.go:123] node cpu capacity is 2
	I1002 21:48:40.495265 1172202 node_conditions.go:105] duration metric: took 2.551063ms to run NodePressure ...
	I1002 21:48:40.495276 1172202 start.go:242] waiting for startup goroutines ...
	I1002 21:48:40.806393 1172202 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-955864" context rescaled to 1 replicas
	I1002 21:48:40.806422 1172202 start.go:247] waiting for cluster config update ...
	I1002 21:48:40.806432 1172202 start.go:256] writing updated cluster config ...
	I1002 21:48:40.806735 1172202 ssh_runner.go:195] Run: rm -f paused
	I1002 21:48:40.876396 1172202 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:48:40.879665 1172202 out.go:179] * Done! kubectl is now configured to use "cert-expiration-955864" cluster and "default" namespace by default
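With the kubeconfig updated, a quick smoke test from the host would be (the context name matches the profile name, as the Done! line says):

	kubectl config use-context cert-expiration-955864
	kubectl get nodes   # the single control-plane node should be listed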
	I1002 21:49:43.270226 1167760 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000666018s
	I1002 21:49:43.270332 1167760 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000730378s
	I1002 21:49:43.273860 1167760 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000020595s
	I1002 21:49:43.273887 1167760 kubeadm.go:318] 
	I1002 21:49:43.274064 1167760 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:49:43.274584 1167760 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:49:43.274755 1167760 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:49:43.274930 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:49:43.275065 1167760 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:49:43.275220 1167760 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:49:43.275226 1167760 kubeadm.go:318] 
	I1002 21:49:43.277041 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:49:43.277303 1167760 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:49:43.277457 1167760 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:49:43.278086 1167760 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:49:43.278168 1167760 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
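The crictl workflow kubeadm recommends above can be run as-is on the node. A minimal sketch using the exact socket path quoted in the log; CONTAINERID is a placeholder for whichever ID the first command surfaces:

	# list all Kubernetes containers (running or exited), excluding pause sandboxes
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the logs of the failing container
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID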
	I1002 21:49:43.278227 1167760 kubeadm.go:402] duration metric: took 8m15.239753492s to StartCluster
	I1002 21:49:43.278265 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:49:43.278335 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:49:43.303452 1167760 cri.go:89] found id: ""
	I1002 21:49:43.303487 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.303495 1167760 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:49:43.303502 1167760 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:49:43.303561 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:49:43.332557 1167760 cri.go:89] found id: ""
	I1002 21:49:43.332636 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.332660 1167760 logs.go:284] No container was found matching "etcd"
	I1002 21:49:43.332679 1167760 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:49:43.332769 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:49:43.359893 1167760 cri.go:89] found id: ""
	I1002 21:49:43.359923 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.359932 1167760 logs.go:284] No container was found matching "coredns"
	I1002 21:49:43.359944 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:49:43.360004 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:49:43.390819 1167760 cri.go:89] found id: ""
	I1002 21:49:43.390844 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.390853 1167760 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:49:43.390860 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:49:43.390918 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:49:43.416350 1167760 cri.go:89] found id: ""
	I1002 21:49:43.416376 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.416385 1167760 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:49:43.416400 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:49:43.416465 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:49:43.451083 1167760 cri.go:89] found id: ""
	I1002 21:49:43.451114 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.451123 1167760 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:49:43.451159 1167760 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:49:43.451235 1167760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:49:43.481350 1167760 cri.go:89] found id: ""
	I1002 21:49:43.481381 1167760 logs.go:282] 0 containers: []
	W1002 21:49:43.481390 1167760 logs.go:284] No container was found matching "kindnet"
	I1002 21:49:43.481399 1167760 logs.go:123] Gathering logs for kubelet ...
	I1002 21:49:43.481410 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:49:43.567759 1167760 logs.go:123] Gathering logs for dmesg ...
	I1002 21:49:43.567796 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:49:43.586454 1167760 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:49:43.586480 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:49:43.657452 1167760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:49:43.649649    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.650186    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.651725    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.652150    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.653340    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:49:43.649649    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.650186    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.651725    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.652150    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:43.653340    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:49:43.657478 1167760 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:49:43.657490 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:49:43.734110 1167760 logs.go:123] Gathering logs for container status ...
	I1002 21:49:43.734144 1167760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
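The log-gathering pass above reduces to five shell commands, reproduced verbatim from the surrounding lines so the same evidence can be collected by hand on the node:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a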
	W1002 21:49:43.763948 1167760 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501822128s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000666018s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000730378s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000020595s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:49:43.764011 1167760 out.go:285] * 
	W1002 21:49:43.764097 1167760 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 21:49:43.764117 1167760 out.go:285] * 
	W1002 21:49:43.766672 1167760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:49:43.772424 1167760 out.go:203] 
	W1002 21:49:43.775278 1167760 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 21:49:43.775317 1167760 out.go:285] * 
	I1002 21:49:43.778474 1167760 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.148863115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.150118942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.161630243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.162556873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.168104771Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6867aab5-daef-4985-afc0-d28b2eddae9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.169877262Z" level=info msg="createCtr: deleting container ID e1154f221e4a9f7a2ce063fb2e26719efd8449e41332d2ccfea3fa3c0fe8f13c from idIndex" id=6867aab5-daef-4985-afc0-d28b2eddae9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.170001402Z" level=info msg="createCtr: removing container e1154f221e4a9f7a2ce063fb2e26719efd8449e41332d2ccfea3fa3c0fe8f13c" id=6867aab5-daef-4985-afc0-d28b2eddae9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.170154399Z" level=info msg="createCtr: deleting container e1154f221e4a9f7a2ce063fb2e26719efd8449e41332d2ccfea3fa3c0fe8f13c from storage" id=6867aab5-daef-4985-afc0-d28b2eddae9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.17326374Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-916563_kube-system_ec3e40b086f0ccaf1e43dcdc7caedd5c_0" id=6867aab5-daef-4985-afc0-d28b2eddae9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.174738524Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=632102e3-8e41-4384-97bc-90f4cfc24806 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.175860275Z" level=info msg="createCtr: deleting container ID 58908ceaa23dbc71ed8d5ed2c1847175c4db30dbf1d019348f220937bd9a3108 from idIndex" id=632102e3-8e41-4384-97bc-90f4cfc24806 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.175902875Z" level=info msg="createCtr: removing container 58908ceaa23dbc71ed8d5ed2c1847175c4db30dbf1d019348f220937bd9a3108" id=632102e3-8e41-4384-97bc-90f4cfc24806 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.17593722Z" level=info msg="createCtr: deleting container 58908ceaa23dbc71ed8d5ed2c1847175c4db30dbf1d019348f220937bd9a3108 from storage" id=632102e3-8e41-4384-97bc-90f4cfc24806 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:37 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:37.178098427Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-916563_kube-system_e89a4a708c3297b1ff582bad32a03446_0" id=632102e3-8e41-4384-97bc-90f4cfc24806 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.141733826Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6e4d521e-7390-4f38-ae06-429941831cfc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.142765757Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=48631b29-6936-4c46-b110-2c892a31a89d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.14377282Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-916563/kube-apiserver" id=df3cf90b-28fb-4ce1-b441-fa1f92fbfd78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.14399322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.148631341Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.149308919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.161465709Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=df3cf90b-28fb-4ce1-b441-fa1f92fbfd78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.162684893Z" level=info msg="createCtr: deleting container ID d83580a49d5d35f730a60f33a5e700ab866ad9bc001ec862d1b3b5fa7923c892 from idIndex" id=df3cf90b-28fb-4ce1-b441-fa1f92fbfd78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.162793182Z" level=info msg="createCtr: removing container d83580a49d5d35f730a60f33a5e700ab866ad9bc001ec862d1b3b5fa7923c892" id=df3cf90b-28fb-4ce1-b441-fa1f92fbfd78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.162829981Z" level=info msg="createCtr: deleting container d83580a49d5d35f730a60f33a5e700ab866ad9bc001ec862d1b3b5fa7923c892 from storage" id=df3cf90b-28fb-4ce1-b441-fa1f92fbfd78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:49:39 force-systemd-env-916563 crio[838]: time="2025-10-02T21:49:39.165360301Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-916563_kube-system_d49a72f2127e99af616d8e32a0c3ecdc_0" id=df3cf90b-28fb-4ce1-b441-fa1f92fbfd78 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:49:44.803177    2475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:44.803976    2475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:44.805593    2475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:44.806299    2475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:49:44.807900    2475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.176407] overlayfs: idmapped layers are currently not supported
	[ +43.828152] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:09] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:49:44 up  6:32,  0 user,  load average: 0.54, 0.71, 1.22
	Linux force-systemd-env-916563 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:  > podSandboxID="dd0303a55e485d4189dec27aa1a1bb96ce3f297fdf8a0b4482d14dfafd24d6c2"
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]: E1002 21:49:37.173741    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:         container etcd start failed in pod etcd-force-systemd-env-916563_kube-system(ec3e40b086f0ccaf1e43dcdc7caedd5c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:  > logger="UnhandledError"
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]: E1002 21:49:37.173773    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-916563" podUID="ec3e40b086f0ccaf1e43dcdc7caedd5c"
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]: E1002 21:49:37.179717    1778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:  > podSandboxID="ebcdb15d828d96fdae1f6e364952825b1bcb1bb47a75aff2bd08c0b71f79bc6c"
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]: E1002 21:49:37.179812    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-916563_kube-system(e89a4a708c3297b1ff582bad32a03446): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]:  > logger="UnhandledError"
	Oct 02 21:49:37 force-systemd-env-916563 kubelet[1778]: E1002 21:49:37.179845    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-916563" podUID="e89a4a708c3297b1ff582bad32a03446"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: E1002 21:49:39.141216    1778 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-916563\" not found" node="force-systemd-env-916563"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: E1002 21:49:39.165672    1778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]:  > podSandboxID="5a47f0b6d5dd22e02f633bdd9692e24fbc92d776375de28f7560989d55ec344a"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: E1002 21:49:39.165784    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-916563_kube-system(d49a72f2127e99af616d8e32a0c3ecdc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]:  > logger="UnhandledError"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: E1002 21:49:39.165816    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-916563" podUID="d49a72f2127e99af616d8e32a0c3ecdc"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: E1002 21:49:39.752858    1778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-916563?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: I1002 21:49:39.941125    1778 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-916563"
	Oct 02 21:49:39 force-systemd-env-916563 kubelet[1778]: E1002 21:49:39.941500    1778 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-env-916563"
	Oct 02 21:49:43 force-systemd-env-916563 kubelet[1778]: E1002 21:49:43.176361    1778 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-916563\" not found"
	Oct 02 21:49:44 force-systemd-env-916563 kubelet[1778]: E1002 21:49:44.495287    1778 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.76.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
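The CreateContainer attempts in the CRI-O and kubelet sections above all fail with "cannot open sd-bus: No such file or directory", which is consistent with a runtime configured for the systemd cgroup manager on a node where no systemd D-Bus socket is reachable. A minimal check, assuming CRI-O's stock config locations (the report itself does not confirm this root cause):

	# is CRI-O set to cgroup_manager = "systemd"?
	grep -rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# the sockets sd-bus would try to open
	ls -l /run/systemd/private /run/dbus/system_bus_socket 2>/dev/null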
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-916563 -n force-systemd-env-916563
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-916563 -n force-systemd-env-916563: exit status 6 (464.668812ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:49:45.366702 1175115 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-916563" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-916563" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-916563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-916563
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-916563: (1.942814094s)
--- FAIL: TestForceSystemdEnv (513.86s)

TestFunctional/parallel/DashboardCmd (302.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-850296 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-850296 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-850296 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-850296 --alsologtostderr -v=1] stderr:
I1002 20:54:09.454615 1024332 out.go:360] Setting OutFile to fd 1 ...
I1002 20:54:09.455911 1024332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:54:09.455928 1024332 out.go:374] Setting ErrFile to fd 2...
I1002 20:54:09.455933 1024332 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:54:09.456208 1024332 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
I1002 20:54:09.456484 1024332 mustload.go:65] Loading cluster: functional-850296
I1002 20:54:09.456887 1024332 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:54:09.457344 1024332 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:54:09.480653 1024332 host.go:66] Checking if "functional-850296" exists ...
I1002 20:54:09.481007 1024332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:54:09.553224 1024332 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:54:09.543441819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 20:54:09.553351 1024332 api_server.go:166] Checking apiserver status ...
I1002 20:54:09.553424 1024332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:54:09.553477 1024332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:54:09.570666 1024332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:54:09.668946 1024332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3920/cgroup
I1002 20:54:09.676864 1024332 api_server.go:182] apiserver freezer: "5:freezer:/docker/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/crio/crio-275e899f5200905471afcb9d9b210a0463a726a93b579fb14dc43c0cfc487a07"
I1002 20:54:09.676944 1024332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/crio/crio-275e899f5200905471afcb9d9b210a0463a726a93b579fb14dc43c0cfc487a07/freezer.state
I1002 20:54:09.684013 1024332 api_server.go:204] freezer state: "THAWED"
I1002 20:54:09.684044 1024332 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 20:54:09.692288 1024332 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
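The apiserver probe above can be replayed by hand. A minimal sketch using the process pattern, cgroup path, and endpoint quoted in the log; the PID and the docker/crio IDs in the cgroup path vary per run, so the placeholders below must be filled in from the pgrep and inspect output:

	# locate the apiserver process the way minikube does
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# confirm its freezer cgroup is THAWED (substitute the IDs for this run)
	sudo cat /sys/fs/cgroup/freezer/docker/<docker-id>/crio/crio-<container-id>/freezer.state
	# hit the health endpoint directly; -k because the cluster serves its own CA
	curl -k https://192.168.49.2:8441/healthz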
W1002 20:54:09.692327 1024332 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 20:54:09.692511 1024332 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:54:09.692523 1024332 addons.go:69] Setting dashboard=true in profile "functional-850296"
I1002 20:54:09.692530 1024332 addons.go:238] Setting addon dashboard=true in "functional-850296"
I1002 20:54:09.692572 1024332 host.go:66] Checking if "functional-850296" exists ...
I1002 20:54:09.693008 1024332 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:54:09.714169 1024332 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 20:54:09.717145 1024332 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 20:54:09.720020 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 20:54:09.720042 1024332 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 20:54:09.720118 1024332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:54:09.737018 1024332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:54:09.843330 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 20:54:09.843380 1024332 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 20:54:09.857892 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 20:54:09.857915 1024332 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 20:54:09.871781 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 20:54:09.871830 1024332 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 20:54:09.885346 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 20:54:09.885368 1024332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 20:54:09.898630 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 20:54:09.898652 1024332 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 20:54:09.911698 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 20:54:09.911720 1024332 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 20:54:09.924968 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 20:54:09.924990 1024332 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 20:54:09.938208 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 20:54:09.938252 1024332 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 20:54:09.951875 1024332 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:54:09.951918 1024332 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 20:54:09.964733 1024332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 20:54:10.718990 1024332 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-850296 addons enable metrics-server

I1002 20:54:10.721973 1024332 addons.go:201] Writing out "functional-850296" config to set dashboard=true...
W1002 20:54:10.722295 1024332 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 20:54:10.722954 1024332 kapi.go:59] client config for functional-850296: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 20:54:10.723503 1024332 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 20:54:10.723521 1024332 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 20:54:10.723527 1024332 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 20:54:10.723532 1024332 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 20:54:10.723545 1024332 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 20:54:10.739129 1024332 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  5aa214b1-b6a2-49d9-88f1-4a09e443ad2e 1775 0 2025-10-02 20:54:10 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 20:54:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.243.202,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.243.202],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 20:54:10.739286 1024332 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 20:54:10.739362 1024332 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-850296 proxy --port 36195]
I1002 20:54:10.739648 1024332 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 20:54:10.797643 1024332 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 20:54:10.797698 1024332 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 20:54:10.816044 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f206d95-4882-4092-bc21-a200db0047cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004b37c0 TLS:<nil>}
I1002 20:54:10.816137 1024332 retry.go:31] will retry after 135.759µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.820248 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f8a6458-f259-43c8-a3e2-98040fc4b674] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004b3900 TLS:<nil>}
I1002 20:54:10.820307 1024332 retry.go:31] will retry after 215.964µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.824045 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d69880c-36a1-4dc0-8b89-8c44704d876f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e780 TLS:<nil>}
I1002 20:54:10.824120 1024332 retry.go:31] will retry after 136.85µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.828330 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f396ba9b-1a9c-493a-a038-90787fb0789a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40015d2080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004b3a40 TLS:<nil>}
I1002 20:54:10.828392 1024332 retry.go:31] will retry after 256.701µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.832156 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf98faf6-5213-4a2e-bccc-974409e3d47b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031e8c0 TLS:<nil>}
I1002 20:54:10.832227 1024332 retry.go:31] will retry after 490.233µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.835788 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8b7b6a9a-a00d-474a-bdd8-572bf87c6acf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031eb40 TLS:<nil>}
I1002 20:54:10.835856 1024332 retry.go:31] will retry after 662.704µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.840547 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[872298c9-12d5-4ab2-963e-acf96e05047d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031ec80 TLS:<nil>}
I1002 20:54:10.840619 1024332 retry.go:31] will retry after 855.878µs: Temporary Error: unexpected response code: 503
I1002 20:54:10.854250 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[909b2694-3a71-4c2e-af01-e20596a25dbd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d78c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031f040 TLS:<nil>}
I1002 20:54:10.854315 1024332 retry.go:31] will retry after 2.35488ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.862733 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7effdd17-64e8-4efa-9290-d9f6e224da12] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d79c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031f180 TLS:<nil>}
I1002 20:54:10.862820 1024332 retry.go:31] will retry after 3.136027ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.870608 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c2ba234-ad3e-4839-afc4-8adbeff71454] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031f400 TLS:<nil>}
I1002 20:54:10.870706 1024332 retry.go:31] will retry after 4.935722ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.878767 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[32684db9-8f4b-44eb-99e9-0af33075db52] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40015d2400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e000 TLS:<nil>}
I1002 20:54:10.878849 1024332 retry.go:31] will retry after 3.640287ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.885825 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc06e142-4d49-492b-9b86-fdaa2ff08d7b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40015d2480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e140 TLS:<nil>}
I1002 20:54:10.885893 1024332 retry.go:31] will retry after 4.664418ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.894257 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e27c674b-4fc9-4ff8-841a-028dcdbbc56f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40015d2500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e280 TLS:<nil>}
I1002 20:54:10.894321 1024332 retry.go:31] will retry after 9.140028ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.910422 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ab9734f-c0dd-4cc5-960c-4a4a8c44d607] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40007d7e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031fb80 TLS:<nil>}
I1002 20:54:10.910488 1024332 retry.go:31] will retry after 10.672664ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.925758 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[250f70f3-2604-456f-ab6a-304358f0e602] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40015d2600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e3c0 TLS:<nil>}
I1002 20:54:10.925819 1024332 retry.go:31] will retry after 22.862702ms: Temporary Error: unexpected response code: 503
I1002 20:54:10.952234 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c5ea2fb-5ad6-4677-82b6-ff4e6509ed53] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:10 GMT]] Body:0x40015d2680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e500 TLS:<nil>}
I1002 20:54:10.952295 1024332 retry.go:31] will retry after 58.098463ms: Temporary Error: unexpected response code: 503
I1002 20:54:11.013504 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[967c4e4d-41f9-49ba-843d-8a24e338111b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:11 GMT]] Body:0x40015d2740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e640 TLS:<nil>}
I1002 20:54:11.013568 1024332 retry.go:31] will retry after 52.318295ms: Temporary Error: unexpected response code: 503
I1002 20:54:11.069877 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[669634b2-6f54-4150-83b0-8b1bff291e09] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:11 GMT]] Body:0x40015d2800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e780 TLS:<nil>}
I1002 20:54:11.069945 1024332 retry.go:31] will retry after 110.851799ms: Temporary Error: unexpected response code: 503
I1002 20:54:11.185028 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c2efd91b-b65e-4bf0-9d07-e47481621418] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:11 GMT]] Body:0x40015d28c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e8c0 TLS:<nil>}
I1002 20:54:11.185094 1024332 retry.go:31] will retry after 157.036257ms: Temporary Error: unexpected response code: 503
I1002 20:54:11.345327 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b7888ce-9f29-4bc4-98e4-1fa636a693fb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:11 GMT]] Body:0x40015d2940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026ea00 TLS:<nil>}
I1002 20:54:11.345396 1024332 retry.go:31] will retry after 215.297096ms: Temporary Error: unexpected response code: 503
I1002 20:54:11.564767 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d28bbf69-97bc-4a21-be49-4e09677b5308] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:11 GMT]] Body:0x4001690240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031fcc0 TLS:<nil>}
I1002 20:54:11.564857 1024332 retry.go:31] will retry after 191.280614ms: Temporary Error: unexpected response code: 503
I1002 20:54:11.760208 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f96b8145-8edd-49c3-805e-1db1a7c46ee6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:11 GMT]] Body:0x4001690300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026eb40 TLS:<nil>}
I1002 20:54:11.760269 1024332 retry.go:31] will retry after 512.808135ms: Temporary Error: unexpected response code: 503
I1002 20:54:12.276999 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba1871fd-c349-4b7e-b9a6-e639ca864250] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:12 GMT]] Body:0x40016903c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026ec80 TLS:<nil>}
I1002 20:54:12.277072 1024332 retry.go:31] will retry after 1.097451605s: Temporary Error: unexpected response code: 503
I1002 20:54:13.377838 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd387162-5baa-4cbd-a157-5e7b3633f9d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:13 GMT]] Body:0x40015d2ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026edc0 TLS:<nil>}
I1002 20:54:13.377906 1024332 retry.go:31] will retry after 1.12828322s: Temporary Error: unexpected response code: 503
I1002 20:54:14.509217 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20b87000-bbb0-4490-828d-7e934c3eeedc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:14 GMT]] Body:0x40016904c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400031fe00 TLS:<nil>}
I1002 20:54:14.509290 1024332 retry.go:31] will retry after 2.325860193s: Temporary Error: unexpected response code: 503
I1002 20:54:16.838306 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[545f0956-2e8e-4543-bfda-3fe04160263d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:16 GMT]] Body:0x40015d2bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026ef00 TLS:<nil>}
I1002 20:54:16.838395 1024332 retry.go:31] will retry after 1.818564583s: Temporary Error: unexpected response code: 503
I1002 20:54:18.660818 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ad383ae-60d1-4a37-975d-23511b88f3e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:18 GMT]] Body:0x40016905c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001722000 TLS:<nil>}
I1002 20:54:18.660886 1024332 retry.go:31] will retry after 4.096387895s: Temporary Error: unexpected response code: 503
I1002 20:54:22.762452 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f433b90b-ec6f-4a6a-8dc6-b34eb7006506] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:22 GMT]] Body:0x4001690640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001722140 TLS:<nil>}
I1002 20:54:22.762514 1024332 retry.go:31] will retry after 7.039237345s: Temporary Error: unexpected response code: 503
I1002 20:54:29.805658 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7820c5b5-e4c5-4f19-8e59-d547410beca3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:29 GMT]] Body:0x40015d2d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001722280 TLS:<nil>}
I1002 20:54:29.805723 1024332 retry.go:31] will retry after 5.431834056s: Temporary Error: unexpected response code: 503
I1002 20:54:35.240586 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff305ff6-90e4-4c5a-9baf-c85a615b9e88] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:35 GMT]] Body:0x4001690740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f040 TLS:<nil>}
I1002 20:54:35.240650 1024332 retry.go:31] will retry after 18.547817214s: Temporary Error: unexpected response code: 503
I1002 20:54:53.793736 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94ef78c5-4c07-4925-bb16-c91c5cf73cc9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:54:53 GMT]] Body:0x40015d2e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40017223c0 TLS:<nil>}
I1002 20:54:53.793806 1024332 retry.go:31] will retry after 17.868010343s: Temporary Error: unexpected response code: 503
I1002 20:55:11.665046 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[581f17f9-361c-4de2-9385-74c718d8c6df] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:55:11 GMT]] Body:0x4001690800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001722500 TLS:<nil>}
I1002 20:55:11.665106 1024332 retry.go:31] will retry after 26.507608544s: Temporary Error: unexpected response code: 503
I1002 20:55:38.176436 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d6d9b4d-ca25-4036-955c-1ea285a8ca20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:55:38 GMT]] Body:0x40016908c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f180 TLS:<nil>}
I1002 20:55:38.176502 1024332 retry.go:31] will retry after 37.730983325s: Temporary Error: unexpected response code: 503
I1002 20:56:15.910598 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a8e5baa-ceea-4421-bbab-3617fdd9be07] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:56:15 GMT]] Body:0x4001690100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f2c0 TLS:<nil>}
I1002 20:56:15.910667 1024332 retry.go:31] will retry after 1m8.763120747s: Temporary Error: unexpected response code: 503
I1002 20:57:24.677133 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88c2e164-1a54-4bfd-824c-8a13462c91d1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:57:24 GMT]] Body:0x4001690200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001722640 TLS:<nil>}
I1002 20:57:24.677196 1024332 retry.go:31] will retry after 1m12.134449722s: Temporary Error: unexpected response code: 503
I1002 20:58:36.816969 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[803c43a5-32f1-4c7c-a76f-d4c600d42dfe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:58:36 GMT]] Body:0x40015d20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f400 TLS:<nil>}
I1002 20:58:36.817078 1024332 retry.go:31] will retry after 31.537311011s: Temporary Error: unexpected response code: 503
I1002 20:59:08.360799 1024332 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[51c36d8d-ee7d-43ad-b7b8-b21e5a4b5130] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 20:59:08 GMT]] Body:0x4001690180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001722780 TLS:<nil>}
I1002 20:59:08.360880 1024332 retry.go:31] will retry after 1m25.928043613s: Temporary Error: unexpected response code: 503
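The 503 loop above is minikube polling the dashboard Service through a local kubectl proxy, roughly doubling the wait between attempts (retry.go:31). Below is a minimal Go sketch of that pattern, assuming a proxy already serving on 127.0.0.1:36195 (substitute the port kubectl actually prints); the deadline, starting delay, and lack of jitter are illustrative choices, not minikube's exact retry behavior:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Proxy URL as printed in the log; the port is hypothetical here.
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		delay := 200 * time.Microsecond // the log starts retrying in the microsecond range
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("dashboard healthy")
					return
				}
				fmt.Printf("unexpected response code: %d, retrying in %v\n", resp.StatusCode, delay)
			}
			time.Sleep(delay)
			delay *= 2 // exponential backoff (the real loop also adds jitter)
		}
		fmt.Println("timed out waiting for dashboard")
	}

In this run the endpoint never left 503 before the test's timeout, which is why the post-mortem below was collected.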
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-850296
helpers_test.go:243: (dbg) docker inspect functional-850296:

-- stdout --
	[
	    {
	        "Id": "b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc",
	        "Created": "2025-10-02T20:36:51.435019192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1013336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:36:51.495993066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/hosts",
	        "LogPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc-json.log",
	        "Name": "/functional-850296",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-850296:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-850296",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc",
	                "LowerDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-850296",
	                "Source": "/var/lib/docker/volumes/functional-850296/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-850296",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-850296",
	                "name.minikube.sigs.k8s.io": "functional-850296",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb52b33547bb096a1a4f615461c35a0bcedb7dcf2cb23f80fe4ff73d51497877",
	            "SandboxKey": "/var/run/docker/netns/cb52b33547bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33914"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33913"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-850296": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d6:c1:25:47:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56b52cdb1d427e44c48e269bed51ab58dc1dd45aa5f7a71ed9c387d2a4680ab1",
	                    "EndpointID": "4e73ea947047ef10a5fe342cfe5413df47326a143b97016ab2d446b820f6b9a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-850296",
	                        "b3320f49b450"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
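In the inspect output above, the cluster's API-server port 8441/tcp is published on 127.0.0.1:33913. A minimal sketch of reading that mapping programmatically with docker's Go-template format syntax, assuming the functional-850296 container is still present; this helper is illustrative, not part of the test harness:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into NetworkSettings.Ports for the 8441/tcp binding and print its host port.
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "functional-850296").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("API server published at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}

The rest.Config shown earlier targets https://192.168.49.2:8441 directly on the Docker network; the 127.0.0.1 binding is the host-published route to the same port.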
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-850296 -n functional-850296
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 logs -n 25: (1.441714014s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-850296 image ls                                                                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image load --daemon kicbase/echo-server:functional-850296 --alsologtostderr                                                             │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls                                                                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image save kicbase/echo-server:functional-850296 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image rm kicbase/echo-server:functional-850296 --alsologtostderr                                                                        │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls                                                                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image save --daemon kicbase/echo-server:functional-850296 --alsologtostderr                                                             │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /etc/test/nested/copy/993954/hosts                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /etc/ssl/certs/993954.pem                                                                                                  │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /usr/share/ca-certificates/993954.pem                                                                                      │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /etc/ssl/certs/9939542.pem                                                                                                 │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /usr/share/ca-certificates/9939542.pem                                                                                     │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls --format short --alsologtostderr                                                                                               │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ update-context │ functional-850296 update-context --alsologtostderr -v=2                                                                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ ssh            │ functional-850296 ssh pgrep buildkitd                                                                                                                     │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │                     │
	│ image          │ functional-850296 image build -t localhost/my-image:functional-850296 testdata/build --alsologtostderr                                                    │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls                                                                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls --format yaml --alsologtostderr                                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls --format json --alsologtostderr                                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ image          │ functional-850296 image ls --format table --alsologtostderr                                                                                               │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ update-context │ functional-850296 update-context --alsologtostderr -v=2                                                                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	│ update-context │ functional-850296 update-context --alsologtostderr -v=2                                                                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:55 UTC │ 02 Oct 25 20:55 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:54:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:54:09.200890 1024262 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:09.201075 1024262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:09.201104 1024262 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:09.201123 1024262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:09.201424 1024262 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:54:09.201815 1024262 out.go:368] Setting JSON to false
	I1002 20:54:09.202780 1024262 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20187,"bootTime":1759418263,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:54:09.202887 1024262 start.go:140] virtualization:  
	I1002 20:54:09.205921 1024262 out.go:179] * [functional-850296] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:54:09.209620 1024262 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:54:09.209775 1024262 notify.go:221] Checking for updates...
	I1002 20:54:09.215390 1024262 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:09.218271 1024262 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:54:09.221126 1024262 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:54:09.224014 1024262 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:54:09.226775 1024262 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:09.230348 1024262 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:09.231114 1024262 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:54:09.259029 1024262 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:54:09.259187 1024262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:09.318162 1024262 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:54:09.308775621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:54:09.318276 1024262 docker.go:319] overlay module found
	I1002 20:54:09.323109 1024262 out.go:179] * Using the docker driver based on existing profile
	I1002 20:54:09.325952 1024262 start.go:306] selected driver: docker
	I1002 20:54:09.325976 1024262 start.go:936] validating driver "docker" against &{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:09.326181 1024262 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:09.326290 1024262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:09.394799 1024262 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:54:09.384294932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:54:09.395245 1024262 cni.go:84] Creating CNI manager for ""
	I1002 20:54:09.395318 1024262 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:09.395368 1024262 start.go:350] cluster config:
	{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:09.398380 1024262 out.go:179] * dry-run validation complete!
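The config dump and validation pass above can be reproduced against the same profile; a minimal sketch, with driver and runtime flags assumed from the logged cluster config:

  $ out/minikube-linux-arm64 start -p functional-850296 --driver=docker \
      --container-runtime=crio --dry-run --alsologtostderr -v=1

--dry-run validates the configuration without mutating the host, which is why the log ends at "dry-run validation complete!" with no further provisioning.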
	
	
	==> CRI-O <==
	Oct 02 20:57:58 functional-850296 crio[3524]: time="2025-10-02T20:57:58.74538081Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=c4291fc2-73a3-43b1-822a-7e54e4be75ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:08 functional-850296 crio[3524]: time="2025-10-02T20:58:08.955991041Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d3cab719-b607-46fb-84ff-85bc679abf11 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:58:08 functional-850296 crio[3524]: time="2025-10-02T20:58:08.95848125Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 20:58:09 functional-850296 crio[3524]: time="2025-10-02T20:58:09.745029738Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a6588188-da0e-4d9e-b3aa-0d8da8086a37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:09 functional-850296 crio[3524]: time="2025-10-02T20:58:09.745156472Z" level=info msg="Image docker.io/nginx:alpine not found" id=a6588188-da0e-4d9e-b3aa-0d8da8086a37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:09 functional-850296 crio[3524]: time="2025-10-02T20:58:09.745194723Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=a6588188-da0e-4d9e-b3aa-0d8da8086a37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:20 functional-850296 crio[3524]: time="2025-10-02T20:58:20.746417312Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=efbfe754-7688-4746-a4a8-040081478525 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:20 functional-850296 crio[3524]: time="2025-10-02T20:58:20.746548616Z" level=info msg="Image docker.io/nginx:alpine not found" id=efbfe754-7688-4746-a4a8-040081478525 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:20 functional-850296 crio[3524]: time="2025-10-02T20:58:20.746587253Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=efbfe754-7688-4746-a4a8-040081478525 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 functional-850296 crio[3524]: time="2025-10-02T20:58:22.745362092Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=4d219db5-b40a-43f1-ab1b-85cb4e9a5ac3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 functional-850296 crio[3524]: time="2025-10-02T20:58:22.745539113Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=4d219db5-b40a-43f1-ab1b-85cb4e9a5ac3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 functional-850296 crio[3524]: time="2025-10-02T20:58:22.74558858Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=4d219db5-b40a-43f1-ab1b-85cb4e9a5ac3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:33 functional-850296 crio[3524]: time="2025-10-02T20:58:33.745074976Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c985aadb-babd-42cc-9e02-d34c1eca0f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:33 functional-850296 crio[3524]: time="2025-10-02T20:58:33.745211645Z" level=info msg="Image docker.io/nginx:alpine not found" id=c985aadb-babd-42cc-9e02-d34c1eca0f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:33 functional-850296 crio[3524]: time="2025-10-02T20:58:33.745249609Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=c985aadb-babd-42cc-9e02-d34c1eca0f87 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:37 functional-850296 crio[3524]: time="2025-10-02T20:58:37.74556361Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d8646ab9-a2dd-499c-a3af-75cd4f1d9fb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:37 functional-850296 crio[3524]: time="2025-10-02T20:58:37.745742215Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=d8646ab9-a2dd-499c-a3af-75cd4f1d9fb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:37 functional-850296 crio[3524]: time="2025-10-02T20:58:37.745790484Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=d8646ab9-a2dd-499c-a3af-75cd4f1d9fb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:46 functional-850296 crio[3524]: time="2025-10-02T20:58:46.745720893Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=710afb2e-752e-4ede-95f1-675a29d8df88 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:46 functional-850296 crio[3524]: time="2025-10-02T20:58:46.745861541Z" level=info msg="Image docker.io/nginx:alpine not found" id=710afb2e-752e-4ede-95f1-675a29d8df88 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:46 functional-850296 crio[3524]: time="2025-10-02T20:58:46.745901114Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=710afb2e-752e-4ede-95f1-675a29d8df88 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:57 functional-850296 crio[3524]: time="2025-10-02T20:58:57.745505643Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cb6c922e-bd8a-4301-8bf5-caa07dc7cff9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:57 functional-850296 crio[3524]: time="2025-10-02T20:58:57.745670653Z" level=info msg="Image docker.io/nginx:alpine not found" id=cb6c922e-bd8a-4301-8bf5-caa07dc7cff9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:57 functional-850296 crio[3524]: time="2025-10-02T20:58:57.745719972Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=cb6c922e-bd8a-4301-8bf5-caa07dc7cff9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:59:09 functional-850296 crio[3524]: time="2025-10-02T20:59:09.450516816Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
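The repeated "Image docker.io/nginx:alpine not found" entries show CRI-O answering kubelet ImageStatus probes while the image has still not been pulled. A quick way to check the runtime's view directly, assuming the default minikube node and a reachable Docker Hub, is crictl over minikube ssh:

  $ out/minikube-linux-arm64 -p functional-850296 ssh -- sudo crictl images
  $ out/minikube-linux-arm64 -p functional-850296 ssh -- sudo crictl pull docker.io/nginx:alpine

If the manual pull also stalls, the failure is on the registry/network side rather than in kubelet.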
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	51f79106d57cd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   5 minutes ago       Exited              mount-munger              0                   f9e809573f5d0       busybox-mount                               default
	8dfb5e0595e07       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      20 minutes ago      Running             kube-proxy                2                   881e8e9f1e876       kube-proxy-jf4r2                            kube-system
	c6de90c680ce5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      20 minutes ago      Running             kindnet-cni               2                   9863cec15bb5a       kindnet-hzdd7                               kube-system
	991a81471c245       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      20 minutes ago      Running             coredns                   2                   137e0483bc7eb       coredns-66bc5c9577-j9sfw                    kube-system
	28b49cfffc635       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      20 minutes ago      Running             storage-provisioner       2                   50523e12462aa       storage-provisioner                         kube-system
	275e899f52009       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      20 minutes ago      Running             kube-apiserver            0                   88c931ebcfb5f       kube-apiserver-functional-850296            kube-system
	27f03308c1942       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      20 minutes ago      Running             kube-controller-manager   2                   c8908d405c069       kube-controller-manager-functional-850296   kube-system
	1d131e04547ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      20 minutes ago      Running             kube-scheduler            2                   57e9f61b27dc9       kube-scheduler-functional-850296            kube-system
	7d406b360d906       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      20 minutes ago      Running             etcd                      2                   827db98da488f       etcd-functional-850296                      kube-system
	6d1248452ad29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      20 minutes ago      Exited              etcd                      1                   827db98da488f       etcd-functional-850296                      kube-system
	4c2d0d935a5a3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      20 minutes ago      Exited              kube-controller-manager   1                   c8908d405c069       kube-controller-manager-functional-850296   kube-system
	57a5c63b7515c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      20 minutes ago      Exited              kube-scheduler            1                   57e9f61b27dc9       kube-scheduler-functional-850296            kube-system
	cdb96f1a50245       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      20 minutes ago      Exited              storage-provisioner       1                   50523e12462aa       storage-provisioner                         kube-system
	7878706c55ce3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      20 minutes ago      Exited              kindnet-cni               1                   9863cec15bb5a       kindnet-hzdd7                               kube-system
	c9663fe1dfee7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      20 minutes ago      Exited              coredns                   1                   137e0483bc7eb       coredns-66bc5c9577-j9sfw                    kube-system
	a795ea3c6cfd9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      20 minutes ago      Exited              kube-proxy                1                   881e8e9f1e876       kube-proxy-jf4r2                            kube-system
	
	
	==> coredns [991a81471c2453c500385f0a6c23bee980c37e0e4eee80f00f13b4914c9ba5de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52081 - 62515 "HINFO IN 8732729395583003918.4849294333637737484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003943884s
	
	
	==> coredns [c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35798 - 45296 "HINFO IN 1292503344635988855.3549566320544195153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013336221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
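Both CoreDNS instances come up healthy once the API is reachable (the HINFO NXDOMAIN entry is the resolver self-check and is expected). In-cluster resolution can be spot-checked with a throwaway pod; busybox:1.28 is just an assumed image with nslookup available:

  $ kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default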
	
	
	==> describe nodes <==
	Name:               functional-850296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-850296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=functional-850296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-850296
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:59:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:56:25 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:56:25 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:56:25 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:56:25 +0000   Thu, 02 Oct 2025 20:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-850296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 76773bab76c446648979f56596eaecff
	  System UUID:                d0defe04-ab05-4998-9efd-4465d0254c4c
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bqdjf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-node-connect-7d85dfc575-h8qf6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-j9sfw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-functional-850296                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-hzdd7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-functional-850296              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-functional-850296     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-jf4r2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-functional-850296              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-6m5tk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zvmhh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Warning  CgroupV1                 21m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node functional-850296 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node functional-850296 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node functional-850296 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           21m                node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	  Normal   NodeReady                21m                kubelet          Node functional-850296 status is now: NodeReady
	  Normal   RegisteredNode           20m                node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node functional-850296 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node functional-850296 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node functional-850296 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
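This snapshot matches kubectl's node view and can be regenerated at any point while triaging:

  $ kubectl describe node functional-850296

Note the request percentages are computed against allocatable resources, so 70Mi of roughly 8Gi rounds down to 0%.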
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 20:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 20:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e] <==
	{"level":"warn","ts":"2025-10-02T20:38:19.523649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.531322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.555740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.585706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.601943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.623228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.672824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54592","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:38:43.609768Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:38:43.609814Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-850296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T20:38:43.609901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:38:43.763689Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:38:43.763794Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.763817Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T20:38:43.763851Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:38:43.763891Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:38:43.763949Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:38:43.763985Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:38:43.763993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:38:43.764095Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:38:43.764141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:38:43.764174Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.767777Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T20:38:43.767868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.767899Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T20:38:43.767905Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-850296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7d406b360d906cbd403e5610b39152427208b3006f82823e3a1bc43394a91391] <==
	{"level":"warn","ts":"2025-10-02T20:39:00.966616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.980154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.000818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.014976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.033024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.058714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.108726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.121077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.143692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.156964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.173133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.198478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.223054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.242213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.258901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.322142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51620","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:48:59.449195Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2025-10-02T20:48:59.457770Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":957,"took":"8.224062ms","hash":3054057781,"current-db-size-bytes":3186688,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3186688,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-10-02T20:48:59.457830Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3054057781,"revision":957,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T20:53:59.456187Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1328}
	{"level":"info","ts":"2025-10-02T20:53:59.464653Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1328,"took":"8.192512ms","hash":4200314812,"current-db-size-bytes":3186688,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2142208,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-10-02T20:53:59.464705Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4200314812,"revision":1328,"compact-revision":957}
	{"level":"info","ts":"2025-10-02T20:58:59.465056Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1708}
	{"level":"info","ts":"2025-10-02T20:58:59.472887Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1708,"took":"7.48396ms","hash":1604495968,"current-db-size-bytes":3186688,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-10-02T20:58:59.472937Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1604495968,"revision":1708,"compact-revision":1328}
	
	
	==> kernel <==
	 20:59:10 up  5:41,  0 user,  load average: 0.41, 0.26, 0.61
	Linux functional-850296 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93] <==
	I1002 20:38:15.626090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 20:38:15.642339       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 20:38:15.642579       1 main.go:148] setting mtu 1500 for CNI 
	I1002 20:38:15.642605       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 20:38:15.642619       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T20:38:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 20:38:15.823304       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 20:38:15.823379       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 20:38:15.823411       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 20:38:15.826976       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 20:38:20.728679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 20:38:20.728784       1 metrics.go:72] Registering metrics
	I1002 20:38:20.728874       1 controller.go:711] "Syncing nftables rules"
	I1002 20:38:25.826113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:38:25.826235       1 main.go:301] handling current node
	I1002 20:38:35.823570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:38:35.823603       1 main.go:301] handling current node
	
	
	==> kindnet [c6de90c680ce5402050e16cb4f6e81ee97109c3bb463f7e3ffae85261344e670] <==
	I1002 20:57:03.519778       1 main.go:301] handling current node
	I1002 20:57:13.521390       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:57:13.521499       1 main.go:301] handling current node
	I1002 20:57:23.521816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:57:23.521852       1 main.go:301] handling current node
	I1002 20:57:33.519993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:57:33.520027       1 main.go:301] handling current node
	I1002 20:57:43.519059       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:57:43.519092       1 main.go:301] handling current node
	I1002 20:57:53.519640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:57:53.519679       1 main.go:301] handling current node
	I1002 20:58:03.519150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:58:03.519183       1 main.go:301] handling current node
	I1002 20:58:13.519521       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:58:13.519558       1 main.go:301] handling current node
	I1002 20:58:23.519614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:58:23.519649       1 main.go:301] handling current node
	I1002 20:58:33.522296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:58:33.522351       1 main.go:301] handling current node
	I1002 20:58:43.519212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:58:43.519248       1 main.go:301] handling current node
	I1002 20:58:53.519346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:58:53.519383       1 main.go:301] handling current node
	I1002 20:59:03.520034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:59:03.520157       1 main.go:301] handling current node
	
	
	==> kube-apiserver [275e899f5200905471afcb9d9b210a0463a726a93b579fb14dc43c0cfc487a07] <==
	I1002 20:39:02.083366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 20:39:02.083449       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 20:39:02.100397       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 20:39:02.102318       1 policy_source.go:240] refreshing policies
	E1002 20:39:02.106557       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:39:02.117200       1 cache.go:39] Caches are synced for autoregister controller
	I1002 20:39:02.133480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:39:02.764397       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:39:02.886483       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:39:04.310121       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:39:04.441156       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:39:04.513030       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:39:04.523147       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:39:05.581888       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:39:05.733846       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:39:05.783096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 20:39:20.000215       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.221.12"}
	I1002 20:39:26.213086       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.136.83"}
	I1002 20:43:36.099330       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.147.57"}
	I1002 20:45:00.828056       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.9.174"}
	I1002 20:49:02.041972       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:54:10.392764       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 20:54:10.687760       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.243.202"}
	I1002 20:54:10.709418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.46.176"}
	I1002 20:59:02.042408       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
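The "allocated clusterIPs" lines map one-to-one onto the Services created during the test run; the current allocations can be cross-checked with:

  $ kubectl get svc -A -o wide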
	
	
	==> kube-controller-manager [27f03308c19421d82964512a8f4396955b6f0220780d0d43a730552eb475fd76] <==
	I1002 20:39:05.459325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:39:05.460306       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:39:05.461653       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:39:05.461709       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-850296"
	I1002 20:39:05.461789       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:39:05.459344       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:39:05.460325       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:39:05.460562       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:39:05.475118       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:39:05.476407       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:39:05.478941       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:39:05.479016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:39:05.485126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:39:05.495376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:39:05.507627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:39:05.507651       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:39:05.507659       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1002 20:54:10.511153       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.521926       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.522199       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.538415       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.543537       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.548977       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.556433       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 20:54:10.556629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
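The serviceaccount "kubernetes-dashboard" not found errors are a startup race: the ReplicaSet controller retries pod creation until the addon's ServiceAccount is applied, so they are transient. Whether the race resolved can be confirmed with:

  $ kubectl -n kubernetes-dashboard get serviceaccount,deployment,pod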
	
	
	==> kube-controller-manager [4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5] <==
	I1002 20:38:23.949252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:38:23.950781       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:38:23.955937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:38:23.963251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:38:23.963276       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:38:23.963284       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:38:23.972900       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:38:23.975307       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:38:23.976452       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:38:23.976559       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 20:38:23.976601       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:38:23.976579       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:38:23.976716       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:38:23.976803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:38:23.976567       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:38:23.976591       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 20:38:23.977108       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 20:38:23.977593       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-850296"
	I1002 20:38:23.977647       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:38:23.985638       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 20:38:23.985687       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 20:38:23.985706       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 20:38:23.985711       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 20:38:23.985717       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 20:38:23.990173       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8dfb5e0595e0720813e66577e5555d958f1259cee1c6366fa3f443e2b14c0ae1] <==
	I1002 20:39:03.240087       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:39:03.376638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:39:03.478249       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:39:03.478356       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:39:03.478468       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:39:03.497236       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:39:03.497288       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:39:03.501346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:39:03.501743       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:39:03.501818       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:39:03.505948       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:39:03.506161       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:39:03.506197       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:39:03.506932       1 config.go:309] "Starting node config controller"
	I1002 20:39:03.506952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:39:03.506959       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:39:03.507525       1 config.go:200] "Starting service config controller"
	I1002 20:39:03.507544       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:39:03.506028       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:39:03.609852       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:39:03.609911       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:39:03.611217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
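The nodePortAddresses warning is advisory: with the field unset, NodePorts bind on every local IP. For a kubeadm-style cluster like this one the setting lives in the kube-proxy ConfigMap; a sketch of applying the suggested value, with the field name taken from KubeProxyConfiguration:

  $ kubectl -n kube-system get configmap kube-proxy -o yaml
  # set nodePortAddresses: ["primary"] under config.conf, then
  $ kubectl -n kube-system rollout restart daemonset kube-proxy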
	
	
	==> kube-proxy [a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09] <==
	I1002 20:38:18.258892       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:38:18.975793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:38:20.801120       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:38:20.808394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:38:20.821259       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:38:21.048413       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:38:21.048537       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:38:21.070806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:38:21.071210       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:38:21.071227       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:38:21.072703       1 config.go:200] "Starting service config controller"
	I1002 20:38:21.072770       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:38:21.072820       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:38:21.079579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:38:21.079688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:38:21.076958       1 config.go:309] "Starting node config controller"
	I1002 20:38:21.079781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:38:21.079810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:38:21.075789       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:38:21.079890       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:38:21.079926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:38:21.173900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d131e04547edd912ef6b1b2a69a2e3c509e8bd119fdbc1e1e5e804ca19c5da5] <==
	I1002 20:39:00.118670       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:39:02.038429       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:39:02.038553       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:39:02.038590       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:39:02.038640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:39:02.071647       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:39:02.074053       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:39:02.076511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:39:02.076614       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:39:02.077076       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:39:02.077148       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:39:02.178156       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea] <==
	I1002 20:38:18.329695       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:38:20.519064       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:38:20.519089       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:38:20.519099       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:38:20.519118       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:38:20.630871       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:38:20.630903       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:38:20.641277       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:38:20.654143       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:20.658183       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:20.658269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:38:20.760895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:43.615409       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 20:38:43.615431       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 20:38:43.615453       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 20:38:43.615508       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:43.615682       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 20:38:43.615697       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 20:58:04 functional-850296 kubelet[3848]: E1002 20:58:04.745258    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:58:05 functional-850296 kubelet[3848]: E1002 20:58:05.744941    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:58:08 functional-850296 kubelet[3848]: E1002 20:58:08.955545    3848 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:58:08 functional-850296 kubelet[3848]: E1002 20:58:08.955624    3848 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 20:58:08 functional-850296 kubelet[3848]: E1002 20:58:08.956863    3848 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-6m5tk_kubernetes-dashboard(ff6c2b6e-c0aa-467b-9691-4eba25fbb98a): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:58:08 functional-850296 kubelet[3848]: E1002 20:58:08.956930    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-6m5tk" podUID="ff6c2b6e-c0aa-467b-9691-4eba25fbb98a"
	Oct 02 20:58:09 functional-850296 kubelet[3848]: E1002 20:58:09.745507    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:58:10 functional-850296 kubelet[3848]: E1002 20:58:10.745066    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:58:16 functional-850296 kubelet[3848]: E1002 20:58:16.746201    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:58:18 functional-850296 kubelet[3848]: E1002 20:58:18.745173    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:58:20 functional-850296 kubelet[3848]: E1002 20:58:20.747335    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:58:22 functional-850296 kubelet[3848]: E1002 20:58:22.745946    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-6m5tk" podUID="ff6c2b6e-c0aa-467b-9691-4eba25fbb98a"
	Oct 02 20:58:25 functional-850296 kubelet[3848]: E1002 20:58:25.745530    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:58:28 functional-850296 kubelet[3848]: E1002 20:58:28.745186    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:58:32 functional-850296 kubelet[3848]: E1002 20:58:32.745367    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:58:33 functional-850296 kubelet[3848]: E1002 20:58:33.745957    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:58:36 functional-850296 kubelet[3848]: E1002 20:58:36.746209    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:58:41 functional-850296 kubelet[3848]: E1002 20:58:41.745580    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:58:45 functional-850296 kubelet[3848]: E1002 20:58:45.745150    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:58:46 functional-850296 kubelet[3848]: E1002 20:58:46.746331    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:58:48 functional-850296 kubelet[3848]: E1002 20:58:48.745804    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:58:52 functional-850296 kubelet[3848]: E1002 20:58:52.745609    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:58:57 functional-850296 kubelet[3848]: E1002 20:58:57.746078    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:59:00 functional-850296 kubelet[3848]: E1002 20:59:00.745302    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:59:07 functional-850296 kubelet[3848]: E1002 20:59:07.745677    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	
	
	==> storage-provisioner [28b49cfffc6351da29c7557ee872755ca084db930b14770b1ba25cf3d451dfe7] <==
	W1002 20:58:46.145961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:48.149403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:48.154644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:50.158779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:50.166734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:52.170214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:52.175128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:54.177646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:54.181966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:56.185465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:56.191984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:58.194849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:58:58.199265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:00.204443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:00.228107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:02.231124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:02.235593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:04.238581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:04.245093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:06.248306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:06.252855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:08.255990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:08.262521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:10.266030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:59:10.271929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf] <==
	I1002 20:38:15.999486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 20:38:20.874431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 20:38:20.874559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 20:38:20.899722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:24.365016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:28.625466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:32.223997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:35.277427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.300024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.305158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:38:38.305311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 20:38:38.305677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99de5c4e-838e-4677-b696-969817484c14", APIVersion:"v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6 became leader
	I1002 20:38:38.305708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6!
	W1002 20:38:38.307641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.316877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:38:38.406699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6!
	W1002 20:38:40.319248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:40.326803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:42.337454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:42.346693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
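
The repeated client-go warnings in the storage-provisioner logs above appear to come from its Endpoints-based leader-election lock: the provisioner re-reads and renews the kube-system/k8s.io-minikube-hostpath Endpoints object every couple of seconds, and each round trip trips the v1.33+ deprecation warning. A minimal verification sketch, assuming the functional-850296 context from this run is still reachable:

	# The lock object named in the 20:38:38 LeaderElection event above:
	kubectl --context functional-850296 -n kube-system get endpoints k8s.io-minikube-hostpath
	# The non-deprecated coordination.k8s.io view of locks on the cluster:
	kubectl --context functional-850296 -n kube-system get leases
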
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
helpers_test.go:269: (dbg) Run:  kubectl --context functional-850296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-6m5tk kubernetes-dashboard-855c9754f9-zvmhh
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-850296 describe pod busybox-mount hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-6m5tk kubernetes-dashboard-855c9754f9-zvmhh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-850296 describe pod busybox-mount hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-6m5tk kubernetes-dashboard-855c9754f9-zvmhh: exit status 1 (124.710192ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:53:42 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://51f79106d57cd554192d3d9a6607bf41cb1737c5a22ee893a54acdd926aa26cb
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 20:54:01 +0000
	      Finished:     Thu, 02 Oct 2025 20:54:01 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l74fz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-l74fz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m29s  default-scheduler  Successfully assigned default/busybox-mount to functional-850296
	  Normal  Pulling    5m29s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.081s (18.698s including waiting). Image size: 3774172 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bqdjf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:45:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cxcc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cxcc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bqdjf to functional-850296
	  Normal   Pulling    7m56s (x5 over 14m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m53s (x5 over 12m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m53s (x5 over 12m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m2s (x26 over 12m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m55s (x31 over 12m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-h8qf6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:43:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsbhd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fsbhd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h8qf6 to functional-850296
	  Normal   Pulling    9m6s (x5 over 15m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     9m6s (x5 over 14m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     9m6s (x5 over 14m)    kubelet            Error: ErrImagePull
	  Warning  Failed     3m55s (x27 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    26s (x43 over 14m)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:39:26 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rxbd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8rxbd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  19m                   default-scheduler  Successfully assigned default/nginx-svc to functional-850296
	  Warning  Failed     18m                   kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    10m (x5 over 19m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9m26s (x5 over 18m)   kubelet            Error: ErrImagePull
	  Warning  Failed     9m26s (x4 over 16m)   kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m37s (x26 over 18m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m33s (x31 over 18m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:39:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4br99 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4br99:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  19m                   default-scheduler  Successfully assigned default/sp-pod to functional-850296
	  Warning  Failed     15m (x2 over 17m)     kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    9m (x5 over 19m)      kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m53s (x5 over 17m)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m53s (x3 over 12m)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m51s (x23 over 17m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m35s (x29 over 17m)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-6m5tk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zvmhh" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-850296 describe pod busybox-mount hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-6m5tk kubernetes-dashboard-855c9754f9-zvmhh: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.51s)
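
Both failure signatures in this test come from image pulls, not from minikube itself: Docker Hub's unauthenticated rate limit (toomanyrequests) and CRI-O's enforcing short-name mode. For the rate-limit half, one mitigation sketch is to side-load the images so the node never contacts Docker Hub; the commands below assume the functional-850296 profile from this run and a host that can still pull (the tags are illustrative):

	# Pull once on the host, where registry credentials can be supplied...
	docker pull docker.io/nginx:alpine
	docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
	# ...then copy the images into the node's container storage.
	minikube -p functional-850296 image load docker.io/nginx:alpine
	minikube -p functional-850296 image load docker.io/kubernetesui/metrics-scraper:v1.0.8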

x
+
TestFunctional/parallel/ServiceCmdConnect (603.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-850296 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-850296 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-h8qf6" [35db67a4-9151-4d30-8df4-3dd0f8212370] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1002 20:43:40.589573  993954 retry.go:31] will retry after 13.055269967s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:43:53.645223  993954 retry.go:31] will retry after 20.150984898s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:44:13.797242  993954 retry.go:31] will retry after 22.227741523s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:44:36.026129  993954 retry.go:31] will retry after 23.878891799s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 20:53:36.442627974 +0000 UTC m=+2132.542564529
functional_test.go:1645: (dbg) Run:  kubectl --context functional-850296 describe po hello-node-connect-7d85dfc575-h8qf6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-850296 describe po hello-node-connect-7d85dfc575-h8qf6 -n default:
Name:             hello-node-connect-7d85dfc575-h8qf6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-850296/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:43:36 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsbhd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fsbhd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h8qf6 to functional-850296
Normal   Pulling    3m31s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     3m31s (x5 over 8m31s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     3m31s (x5 over 8m31s)   kubelet            Error: ErrImagePull
Warning  Failed     2m10s (x16 over 8m30s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    62s (x21 over 8m30s)    kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-850296 logs hello-node-connect-7d85dfc575-h8qf6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-850296 logs hello-node-connect-7d85dfc575-h8qf6 -n default: exit status 1 (108.926048ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-h8qf6" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-850296 logs hello-node-connect-7d85dfc575-h8qf6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-850296 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-h8qf6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-850296/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:43:36 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsbhd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fsbhd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h8qf6 to functional-850296
Normal   Pulling    3m31s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     3m31s (x5 over 8m31s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     3m31s (x5 over 8m31s)   kubelet            Error: ErrImagePull
Warning  Failed     2m10s (x16 over 8m30s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    62s (x21 over 8m30s)    kubelet            Back-off pulling image "kicbase/echo-server"
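
The "short name mode is enforcing" failures above mean CRI-O refused to resolve the unqualified name kicbase/echo-server because more than one configured search registry could serve it. One sketch of a fix, assuming docker.io is the intended registry (the deployment name matches the one created at functional_test.go:1636; the :1.0 tag is illustrative):

	kubectl --context functional-850296 set image deployment/hello-node-connect \
		echo-server=docker.io/kicbase/echo-server:1.0

Alternatively, a short-name alias for kicbase/echo-server in the node's /etc/containers/registries.conf (or a registries.conf.d drop-in) would pin the name to a single registry without editing the manifest.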

functional_test.go:1618: (dbg) Run:  kubectl --context functional-850296 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-850296 logs -l app=hello-node-connect: exit status 1 (104.003409ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-h8qf6" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-850296 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-850296 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.147.57
IPs:                      10.102.147.57
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30398/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
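Note the empty Endpoints field above: the service selects the hello-node-connect pod, but since that pod never became Ready it contributes no endpoints, so nothing sits behind NodePort 30398. A quick TCP probe of the node address 192.168.49.2 (visible in the docker inspect output below) would confirm the symptom; a minimal hypothetical sketch, with the caveat that whether a backendless NodePort refuses or drops the connection depends on kube-proxy's reject rules:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Node IP and NodePort as reported in the service description above.
		addr := "192.168.49.2:30398"
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// With no ready endpoints there is no backend to DNAT to;
			// kube-proxy typically rejects or drops the connection.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", addr)
	}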
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-850296
helpers_test.go:243: (dbg) docker inspect functional-850296:

-- stdout --
	[
	    {
	        "Id": "b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc",
	        "Created": "2025-10-02T20:36:51.435019192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1013336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:36:51.495993066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/hosts",
	        "LogPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc-json.log",
	        "Name": "/functional-850296",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-850296:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-850296",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc",
	                "LowerDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-850296",
	                "Source": "/var/lib/docker/volumes/functional-850296/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-850296",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-850296",
	                "name.minikube.sigs.k8s.io": "functional-850296",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb52b33547bb096a1a4f615461c35a0bcedb7dcf2cb23f80fe4ff73d51497877",
	            "SandboxKey": "/var/run/docker/netns/cb52b33547bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33914"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33913"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-850296": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d6:c1:25:47:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56b52cdb1d427e44c48e269bed51ab58dc1dd45aa5f7a71ed9c387d2a4680ab1",
	                    "EndpointID": "4e73ea947047ef10a5fe342cfe5413df47326a143b97016ab2d446b820f6b9a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-850296",
	                        "b3320f49b450"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
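The Ports map under NetworkSettings above is how the harness reaches the node: each published container port (22 for SSH, 8441 for the API server, 5000 for the registry, and so on) is bound to an ephemeral port on 127.0.0.1. The minikube logs further down read a binding back with a Go template passed to docker container inspect; the same lookup as a small standalone sketch (container name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the logs below use to discover the SSH port
		// (33910 for the container inspected above).
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-850296").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}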
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-850296 -n functional-850296
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 logs -n 25: (1.45976074s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ kubectl │ functional-850296 kubectl -- --context functional-850296 get pods                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ start   │ -p functional-850296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:39 UTC │
	│ service │ invalid-svc -p functional-850296                                                                                          │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ config  │ functional-850296 config unset cpus                                                                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ cp      │ functional-850296 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config get cpus                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ config  │ functional-850296 config set cpus 2                                                                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config get cpus                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config unset cpus                                                                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ ssh     │ functional-850296 ssh -n functional-850296 sudo cat /home/docker/cp-test.txt                                              │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config get cpus                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ ssh     │ functional-850296 ssh echo hello                                                                                          │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ cp      │ functional-850296 cp functional-850296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd631086801/001/cp-test.txt │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ ssh     │ functional-850296 ssh cat /etc/hostname                                                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ ssh     │ functional-850296 ssh -n functional-850296 sudo cat /home/docker/cp-test.txt                                              │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ tunnel  │ functional-850296 tunnel --alsologtostderr                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ tunnel  │ functional-850296 tunnel --alsologtostderr                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ cp      │ functional-850296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ tunnel  │ functional-850296 tunnel --alsologtostderr                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ ssh     │ functional-850296 ssh -n functional-850296 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ addons  │ functional-850296 addons list                                                                                             │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │ 02 Oct 25 20:43 UTC │
	│ addons  │ functional-850296 addons list -o json                                                                                     │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │ 02 Oct 25 20:43 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:38:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:38:42.361139 1017493 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:38:42.361285 1017493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:38:42.361289 1017493 out.go:374] Setting ErrFile to fd 2...
	I1002 20:38:42.361293 1017493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:38:42.361609 1017493 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:38:42.362026 1017493 out.go:368] Setting JSON to false
	I1002 20:38:42.363130 1017493 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19260,"bootTime":1759418263,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:38:42.363211 1017493 start.go:140] virtualization:  
	I1002 20:38:42.366893 1017493 out.go:179] * [functional-850296] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:38:42.370878 1017493 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:38:42.371005 1017493 notify.go:221] Checking for updates...
	I1002 20:38:42.377020 1017493 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:38:42.380139 1017493 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:38:42.382947 1017493 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:38:42.385838 1017493 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:38:42.388854 1017493 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:38:42.392700 1017493 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:38:42.392808 1017493 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:38:42.426202 1017493 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:38:42.426308 1017493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:38:42.483374 1017493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 20:38:42.474408682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:38:42.483465 1017493 docker.go:319] overlay module found
	I1002 20:38:42.486484 1017493 out.go:179] * Using the docker driver based on existing profile
	I1002 20:38:42.489339 1017493 start.go:306] selected driver: docker
	I1002 20:38:42.489347 1017493 start.go:936] validating driver "docker" against &{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:38:42.489430 1017493 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:38:42.489567 1017493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:38:42.542151 1017493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 20:38:42.532636676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:38:42.542575 1017493 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:38:42.542601 1017493 cni.go:84] Creating CNI manager for ""
	I1002 20:38:42.542657 1017493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:38:42.542696 1017493 start.go:350] cluster config:
	{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:38:42.547700 1017493 out.go:179] * Starting "functional-850296" primary control-plane node in "functional-850296" cluster
	I1002 20:38:42.550619 1017493 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:38:42.553569 1017493 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:38:42.556441 1017493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:38:42.556486 1017493 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:38:42.556507 1017493 cache.go:59] Caching tarball of preloaded images
	I1002 20:38:42.556537 1017493 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:38:42.556590 1017493 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 20:38:42.556604 1017493 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:38:42.556719 1017493 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/config.json ...
	I1002 20:38:42.575706 1017493 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:38:42.575718 1017493 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:38:42.575736 1017493 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:38:42.575758 1017493 start.go:361] acquireMachinesLock for functional-850296: {Name:mk32592cb97eb8369193d1c54e8256f2b98af5f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:38:42.575823 1017493 start.go:365] duration metric: took 45.972µs to acquireMachinesLock for "functional-850296"
	I1002 20:38:42.575842 1017493 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:38:42.575847 1017493 fix.go:55] fixHost starting: 
	I1002 20:38:42.576106 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:38:42.594086 1017493 fix.go:113] recreateIfNeeded on functional-850296: state=Running err=<nil>
	W1002 20:38:42.594106 1017493 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:38:42.597347 1017493 out.go:252] * Updating the running docker "functional-850296" container ...
	I1002 20:38:42.597374 1017493 machine.go:93] provisionDockerMachine start ...
	I1002 20:38:42.597465 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:42.614839 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:42.615145 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:42.615152 1017493 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:38:42.750456 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-850296
	
	I1002 20:38:42.750484 1017493 ubuntu.go:182] provisioning hostname "functional-850296"
	I1002 20:38:42.750550 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:42.770482 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:42.770789 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:42.770798 1017493 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-850296 && echo "functional-850296" | sudo tee /etc/hostname
	I1002 20:38:42.911274 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-850296
	
	I1002 20:38:42.911340 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:42.929615 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:42.929912 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:42.929927 1017493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-850296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-850296/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-850296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:38:43.062557 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:38:43.062572 1017493 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 20:38:43.062609 1017493 ubuntu.go:190] setting up certificates
	I1002 20:38:43.062617 1017493 provision.go:84] configureAuth start
	I1002 20:38:43.062676 1017493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-850296
	I1002 20:38:43.080957 1017493 provision.go:143] copyHostCerts
	I1002 20:38:43.081017 1017493 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 20:38:43.081033 1017493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 20:38:43.081113 1017493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 20:38:43.081210 1017493 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 20:38:43.081213 1017493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 20:38:43.081237 1017493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 20:38:43.081283 1017493 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 20:38:43.081286 1017493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 20:38:43.081314 1017493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 20:38:43.081383 1017493 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.functional-850296 san=[127.0.0.1 192.168.49.2 functional-850296 localhost minikube]
	I1002 20:38:43.253493 1017493 provision.go:177] copyRemoteCerts
	I1002 20:38:43.253557 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:38:43.253600 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:43.275038 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:43.374121 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:38:43.392340 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:38:43.411862 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:38:43.430306 1017493 provision.go:87] duration metric: took 367.65375ms to configureAuth
	I1002 20:38:43.430324 1017493 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:38:43.430522 1017493 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:38:43.430623 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:43.450151 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:43.450492 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:43.450507 1017493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:38:48.831847 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:38:48.831860 1017493 machine.go:96] duration metric: took 6.234479924s to provisionDockerMachine
	I1002 20:38:48.831870 1017493 start.go:294] postStartSetup for "functional-850296" (driver="docker")
	I1002 20:38:48.831879 1017493 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:38:48.831948 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:38:48.831987 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:48.850548 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:48.945891 1017493 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:38:48.949234 1017493 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:38:48.949251 1017493 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:38:48.949261 1017493 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 20:38:48.949320 1017493 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 20:38:48.949394 1017493 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 20:38:48.949470 1017493 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/test/nested/copy/993954/hosts -> hosts in /etc/test/nested/copy/993954
	I1002 20:38:48.949513 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/993954
	I1002 20:38:48.956830 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 20:38:48.973888 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/test/nested/copy/993954/hosts --> /etc/test/nested/copy/993954/hosts (40 bytes)
	I1002 20:38:48.991067 1017493 start.go:297] duration metric: took 159.183681ms for postStartSetup
	I1002 20:38:48.991153 1017493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:38:48.991191 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:49.009230 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:49.103201 1017493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:38:49.108852 1017493 fix.go:57] duration metric: took 6.532998032s for fixHost
	I1002 20:38:49.108867 1017493 start.go:84] releasing machines lock for "functional-850296", held for 6.533036833s
	I1002 20:38:49.108934 1017493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-850296
	I1002 20:38:49.126629 1017493 ssh_runner.go:195] Run: cat /version.json
	I1002 20:38:49.126675 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:49.126707 1017493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:38:49.126757 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:49.149171 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:49.153001 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:49.241839 1017493 ssh_runner.go:195] Run: systemctl --version
	I1002 20:38:49.335144 1017493 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:38:49.370778 1017493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:38:49.375119 1017493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:38:49.375190 1017493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:38:49.382938 1017493 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:38:49.382951 1017493 start.go:496] detecting cgroup driver to use...
	I1002 20:38:49.382984 1017493 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:38:49.383040 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:38:49.398078 1017493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:38:49.411173 1017493 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:38:49.411230 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:38:49.426897 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:38:49.441132 1017493 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:38:49.575868 1017493 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:38:49.714615 1017493 docker.go:234] disabling docker service ...
	I1002 20:38:49.714671 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:38:49.731079 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:38:49.744151 1017493 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:38:49.902454 1017493 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:38:50.048738 1017493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:38:50.062275 1017493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:38:50.080400 1017493 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:38:50.080465 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.091115 1017493 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:38:50.091181 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.102009 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.112282 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.122366 1017493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:38:50.131366 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.140773 1017493 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.150486 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.159801 1017493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:38:50.167865 1017493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:38:50.175826 1017493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:38:50.305209 1017493 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:38:50.505293 1017493 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:38:50.505355 1017493 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:38:50.509309 1017493 start.go:564] Will wait 60s for crictl version
	I1002 20:38:50.509362 1017493 ssh_runner.go:195] Run: which crictl
	I1002 20:38:50.513080 1017493 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:38:50.538463 1017493 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:38:50.538554 1017493 ssh_runner.go:195] Run: crio --version
	I1002 20:38:50.566263 1017493 ssh_runner.go:195] Run: crio --version
	I1002 20:38:50.600610 1017493 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:38:50.603732 1017493 cli_runner.go:164] Run: docker network inspect functional-850296 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:38:50.619742 1017493 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:38:50.626736 1017493 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:38:50.629472 1017493 kubeadm.go:883] updating cluster {Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:38:50.629598 1017493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:38:50.629670 1017493 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:38:50.666818 1017493 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:38:50.666830 1017493 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:38:50.666884 1017493 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:38:50.693412 1017493 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:38:50.693424 1017493 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:38:50.693430 1017493 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:38:50.693521 1017493 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-850296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:38:50.693624 1017493 ssh_runner.go:195] Run: crio config
	I1002 20:38:50.759664 1017493 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:38:50.759696 1017493 cni.go:84] Creating CNI manager for ""
	I1002 20:38:50.759706 1017493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:38:50.759714 1017493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:38:50.759736 1017493 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-850296 NodeName:functional-850296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:38:50.759859 1017493 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-850296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
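	For context on how the kubeadm config block above comes to exist: minikube renders it from its cluster profile before copying it to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp lines below). A minimal, self-contained sketch of that templating step in Go follows; the struct and template here are illustrative stand-ins, not minikube's actual types.

	package main

	import (
		"os"
		"text/template"
	)

	// clusterParams holds the handful of values substituted into the kubeadm
	// config above; an illustrative struct, not minikube's real config type.
	type clusterParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		K8sVersion       string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		p := clusterParams{
			AdvertiseAddress: "192.168.49.2",
			BindPort:         8441,
			NodeName:         "functional-850296",
			PodSubnet:        "10.244.0.0/16",
			K8sVersion:       "v1.34.1",
		}
		// Render to stdout; minikube instead ships the rendered YAML to the node.
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}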
	I1002 20:38:50.759929 1017493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:38:50.768165 1017493 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:38:50.768233 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:38:50.775885 1017493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:38:50.788993 1017493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:38:50.801999 1017493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1002 20:38:50.814950 1017493 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:38:50.819539 1017493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:38:50.953656 1017493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:38:50.968784 1017493 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296 for IP: 192.168.49.2
	I1002 20:38:50.968795 1017493 certs.go:195] generating shared ca certs ...
	I1002 20:38:50.968815 1017493 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:38:50.968981 1017493 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 20:38:50.969027 1017493 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 20:38:50.969033 1017493 certs.go:257] generating profile certs ...
	I1002 20:38:50.969138 1017493 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.key
	I1002 20:38:50.969184 1017493 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/apiserver.key.e3856381
	I1002 20:38:50.969238 1017493 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/proxy-client.key
	I1002 20:38:50.969373 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 20:38:50.969402 1017493 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 20:38:50.969409 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:38:50.969434 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:38:50.969462 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:38:50.969483 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 20:38:50.969544 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 20:38:50.970419 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:38:50.989341 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:38:51.010309 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:38:51.029172 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:38:51.045959 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:38:51.063264 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:38:51.081582 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:38:51.101351 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:38:51.120306 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 20:38:51.139879 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:38:51.158875 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 20:38:51.177954 1017493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:38:51.190525 1017493 ssh_runner.go:195] Run: openssl version
	I1002 20:38:51.196837 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 20:38:51.204824 1017493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 20:38:51.208720 1017493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 20:38:51.208779 1017493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 20:38:51.249859 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:38:51.258091 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:38:51.268417 1017493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:38:51.273197 1017493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:38:51.273253 1017493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:38:51.323556 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:38:51.334073 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 20:38:51.342873 1017493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 20:38:51.349848 1017493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 20:38:51.349917 1017493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 20:38:51.413700 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
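	The openssl/ln sequence above implements OpenSSL's subject-hash lookup scheme: each CA PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so the TLS stack can find it by hash. A rough Go equivalent of that dance, with paths taken from the log and error handling trimmed:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the `openssl x509 -hash` + `ln -fs` steps:
	// compute the cert's subject hash, then symlink <hash>.0 -> the PEM file.
	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}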
	I1002 20:38:51.423808 1017493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:38:51.429889 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:38:51.471943 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:38:51.515762 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:38:51.557180 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:38:51.598791 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:38:51.640333 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
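	The `-checkend 86400` runs above ask openssl whether each control-plane cert expires within the next 24 hours (86400 seconds). The same check expressed with Go's crypto/x509, as a small sketch; the cert path is the one from the log and is hypothetical on any other machine:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires
	// within d, which is what `openssl x509 -checkend 86400` tests for 24h.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}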
	I1002 20:38:51.681546 1017493 kubeadm.go:400] StartCluster: {Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:38:51.681624 1017493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:38:51.681701 1017493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:38:51.711440 1017493 cri.go:89] found id: "6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e"
	I1002 20:38:51.711452 1017493 cri.go:89] found id: "4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5"
	I1002 20:38:51.711455 1017493 cri.go:89] found id: "57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea"
	I1002 20:38:51.711463 1017493 cri.go:89] found id: "cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf"
	I1002 20:38:51.711468 1017493 cri.go:89] found id: "7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93"
	I1002 20:38:51.711470 1017493 cri.go:89] found id: "c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1"
	I1002 20:38:51.711472 1017493 cri.go:89] found id: "a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09"
	I1002 20:38:51.711474 1017493 cri.go:89] found id: "4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c"
	I1002 20:38:51.711476 1017493 cri.go:89] found id: ""
	I1002 20:38:51.711528 1017493 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 20:38:51.723749 1017493 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:38:51Z" level=error msg="open /run/runc: no such file or directory"
	I1002 20:38:51.723855 1017493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:38:51.732599 1017493 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:38:51.732609 1017493 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:38:51.732665 1017493 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:38:51.740815 1017493 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:51.741331 1017493 kubeconfig.go:125] found "functional-850296" server: "https://192.168.49.2:8441"
	I1002 20:38:51.742700 1017493 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:38:51.750496 1017493 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:37:01.216112370 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:38:50.809349143 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
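	The drift check above leans on diff's exit-status contract: 0 means the files match, 1 means they differ, and anything higher is an error, so exit code 1 is what flags "kubeadm config drift". A small Go sketch of the same test:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// detectDrift runs `diff -u old new`; exit code 1 signals that the
	// rendered config differs from the one already on the node.
	func detectDrift(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // identical: no drift
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			return true, string(out), nil // files differ: drift detected
		}
		return false, "", err // diff itself failed
	}

	func main() {
		drift, patch, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if drift {
			fmt.Println("kubeadm config drift detected:\n" + patch)
		}
	}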
	I1002 20:38:51.750506 1017493 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:38:51.750525 1017493 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:38:51.750583 1017493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:38:51.778347 1017493 cri.go:89] found id: "6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e"
	I1002 20:38:51.778359 1017493 cri.go:89] found id: "4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5"
	I1002 20:38:51.778363 1017493 cri.go:89] found id: "57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea"
	I1002 20:38:51.778365 1017493 cri.go:89] found id: "cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf"
	I1002 20:38:51.778368 1017493 cri.go:89] found id: "7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93"
	I1002 20:38:51.778371 1017493 cri.go:89] found id: "c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1"
	I1002 20:38:51.778373 1017493 cri.go:89] found id: "a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09"
	I1002 20:38:51.778376 1017493 cri.go:89] found id: "4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c"
	I1002 20:38:51.778378 1017493 cri.go:89] found id: ""
	I1002 20:38:51.778383 1017493 cri.go:252] Stopping containers: [6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e 4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5 57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf 7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93 c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1 a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09 4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c]
	I1002 20:38:51.778439 1017493 ssh_runner.go:195] Run: which crictl
	I1002 20:38:51.782308 1017493 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e 4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5 57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf 7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93 c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1 a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09 4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c
	I1002 20:38:51.853650 1017493 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:38:51.977243 1017493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:38:51.985605 1017493 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  2 20:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 20:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:37 /etc/kubernetes/scheduler.conf
	
	I1002 20:38:51.985689 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:38:51.994394 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:38:52.008229 1017493 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:52.008294 1017493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:38:52.016389 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:38:52.024460 1017493 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:52.024520 1017493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:38:52.032490 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:38:52.040551 1017493 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:52.040614 1017493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:38:52.048422 1017493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:38:52.057120 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:52.109514 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.364753 1017493 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.255213164s)
	I1002 20:38:56.364813 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.595336 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.666839 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.723308 1017493 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:38:56.723375 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:38:57.224304 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:38:57.724442 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:38:57.742432 1017493 api_server.go:72] duration metric: took 1.01913568s to wait for apiserver process to appear ...
	I1002 20:38:57.742446 1017493 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:38:57.742464 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:01.944066 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:39:01.944084 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:39:01.944097 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:01.959670 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:39:01.959687 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:39:02.243131 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:02.252011 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:39:02.252053 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:39:02.742591 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:02.764212 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:39:02.764235 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:39:03.242738 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:03.252790 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:39:03.252808 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:39:03.743157 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:03.753263 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 20:39:03.769068 1017493 api_server.go:141] control plane version: v1.34.1
	I1002 20:39:03.769084 1017493 api_server.go:131] duration metric: took 6.026632462s to wait for apiserver health ...
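	The wait just completed is a plain poll loop: probe /healthz roughly every 500ms, riding out the transient 403s (anonymous access before RBAC bootstrap) and 500s (rbac/bootstrap-roles and friends still failing) until the apiserver answers 200 "ok". A sketch of that pattern follows; it illustrates the shape of the loop, not minikube's actual api_server.go, and it skips TLS verification only to stay self-contained (the real apiserver cert chains to minikubeCA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz until it returns 200,
	// tolerating the transient 403/500 responses seen in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}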
	I1002 20:39:03.769092 1017493 cni.go:84] Creating CNI manager for ""
	I1002 20:39:03.769098 1017493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:39:03.772711 1017493 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:39:03.775809 1017493 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:39:03.780955 1017493 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:39:03.780966 1017493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:39:03.795200 1017493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 20:39:04.317409 1017493 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:39:04.320784 1017493 system_pods.go:59] 8 kube-system pods found
	I1002 20:39:04.320811 1017493 system_pods.go:61] "coredns-66bc5c9577-j9sfw" [22774fdc-2372-4f6c-a311-f87f3c1da6b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:39:04.320821 1017493 system_pods.go:61] "etcd-functional-850296" [2deb9442-175c-4270-a03a-3fe07c68d042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:39:04.320826 1017493 system_pods.go:61] "kindnet-hzdd7" [d54e1d14-95ee-4a95-93eb-9cb09d7fa4c7] Running
	I1002 20:39:04.320834 1017493 system_pods.go:61] "kube-apiserver-functional-850296" [27680d93-7739-4544-ad2c-7916686d9754] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:39:04.320839 1017493 system_pods.go:61] "kube-controller-manager-functional-850296" [c60690e8-4407-4308-8cad-74366a17a41c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:39:04.320844 1017493 system_pods.go:61] "kube-proxy-jf4r2" [597f79f9-4539-419c-bc29-29bd6f37112f] Running
	I1002 20:39:04.320849 1017493 system_pods.go:61] "kube-scheduler-functional-850296" [e8e589ab-dd32-46d2-8c98-8b4759d57a5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:39:04.320853 1017493 system_pods.go:61] "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
	I1002 20:39:04.320859 1017493 system_pods.go:74] duration metric: took 3.43947ms to wait for pod list to return data ...
	I1002 20:39:04.320865 1017493 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:39:04.323907 1017493 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:39:04.323927 1017493 node_conditions.go:123] node cpu capacity is 2
	I1002 20:39:04.323937 1017493 node_conditions.go:105] duration metric: took 3.068271ms to run NodePressure ...
	I1002 20:39:04.323997 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:39:04.584087 1017493 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 20:39:04.587452 1017493 kubeadm.go:743] kubelet initialised
	I1002 20:39:04.587463 1017493 kubeadm.go:744] duration metric: took 3.362533ms waiting for restarted kubelet to initialise ...
	I1002 20:39:04.587476 1017493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:39:04.596157 1017493 ops.go:34] apiserver oom_adj: -16
	I1002 20:39:04.596168 1017493 kubeadm.go:601] duration metric: took 12.863554s to restartPrimaryControlPlane
	I1002 20:39:04.596175 1017493 kubeadm.go:402] duration metric: took 12.914647323s to StartCluster
	I1002 20:39:04.596189 1017493 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:39:04.596249 1017493 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:39:04.596931 1017493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:39:04.597203 1017493 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:39:04.597448 1017493 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:39:04.597478 1017493 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:39:04.597533 1017493 addons.go:69] Setting storage-provisioner=true in profile "functional-850296"
	I1002 20:39:04.597556 1017493 addons.go:238] Setting addon storage-provisioner=true in "functional-850296"
	W1002 20:39:04.597561 1017493 addons.go:247] addon storage-provisioner should already be in state true
	I1002 20:39:04.597580 1017493 host.go:66] Checking if "functional-850296" exists ...
	I1002 20:39:04.597613 1017493 addons.go:69] Setting default-storageclass=true in profile "functional-850296"
	I1002 20:39:04.597628 1017493 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-850296"
	I1002 20:39:04.597925 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:39:04.597984 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:39:04.601824 1017493 out.go:179] * Verifying Kubernetes components...
	I1002 20:39:04.609249 1017493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:39:04.625293 1017493 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:39:04.627276 1017493 addons.go:238] Setting addon default-storageclass=true in "functional-850296"
	W1002 20:39:04.627286 1017493 addons.go:247] addon default-storageclass should already be in state true
	I1002 20:39:04.627354 1017493 host.go:66] Checking if "functional-850296" exists ...
	I1002 20:39:04.627780 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:39:04.628347 1017493 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:39:04.628356 1017493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:39:04.628407 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:39:04.662409 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:39:04.672335 1017493 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:39:04.672347 1017493 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:39:04.672412 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:39:04.700986 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:39:04.799296 1017493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:39:04.842722 1017493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:39:04.854352 1017493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:39:05.672180 1017493 node_ready.go:35] waiting up to 6m0s for node "functional-850296" to be "Ready" ...
	I1002 20:39:05.678026 1017493 node_ready.go:49] node "functional-850296" is "Ready"
	I1002 20:39:05.678061 1017493 node_ready.go:38] duration metric: took 5.864024ms for node "functional-850296" to be "Ready" ...
	I1002 20:39:05.678100 1017493 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:39:05.678167 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:39:05.686117 1017493 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 20:39:05.689020 1017493 addons.go:514] duration metric: took 1.091522007s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 20:39:05.691809 1017493 api_server.go:72] duration metric: took 1.094583173s to wait for apiserver process to appear ...
	I1002 20:39:05.691819 1017493 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:39:05.691836 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:05.701339 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 20:39:05.702409 1017493 api_server.go:141] control plane version: v1.34.1
	I1002 20:39:05.702421 1017493 api_server.go:131] duration metric: took 10.596048ms to wait for apiserver health ...
	I1002 20:39:05.702428 1017493 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:39:05.706004 1017493 system_pods.go:59] 8 kube-system pods found
	I1002 20:39:05.706021 1017493 system_pods.go:61] "coredns-66bc5c9577-j9sfw" [22774fdc-2372-4f6c-a311-f87f3c1da6b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:39:05.706028 1017493 system_pods.go:61] "etcd-functional-850296" [2deb9442-175c-4270-a03a-3fe07c68d042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:39:05.706065 1017493 system_pods.go:61] "kindnet-hzdd7" [d54e1d14-95ee-4a95-93eb-9cb09d7fa4c7] Running
	I1002 20:39:05.706078 1017493 system_pods.go:61] "kube-apiserver-functional-850296" [27680d93-7739-4544-ad2c-7916686d9754] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:39:05.706088 1017493 system_pods.go:61] "kube-controller-manager-functional-850296" [c60690e8-4407-4308-8cad-74366a17a41c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:39:05.706092 1017493 system_pods.go:61] "kube-proxy-jf4r2" [597f79f9-4539-419c-bc29-29bd6f37112f] Running
	I1002 20:39:05.706097 1017493 system_pods.go:61] "kube-scheduler-functional-850296" [e8e589ab-dd32-46d2-8c98-8b4759d57a5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:39:05.706100 1017493 system_pods.go:61] "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
	I1002 20:39:05.706104 1017493 system_pods.go:74] duration metric: took 3.671735ms to wait for pod list to return data ...
	I1002 20:39:05.706110 1017493 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:39:05.708636 1017493 default_sa.go:45] found service account: "default"
	I1002 20:39:05.708647 1017493 default_sa.go:55] duration metric: took 2.532638ms for default service account to be created ...
	I1002 20:39:05.708653 1017493 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:39:05.711630 1017493 system_pods.go:86] 8 kube-system pods found
	I1002 20:39:05.711648 1017493 system_pods.go:89] "coredns-66bc5c9577-j9sfw" [22774fdc-2372-4f6c-a311-f87f3c1da6b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:39:05.711655 1017493 system_pods.go:89] "etcd-functional-850296" [2deb9442-175c-4270-a03a-3fe07c68d042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:39:05.711660 1017493 system_pods.go:89] "kindnet-hzdd7" [d54e1d14-95ee-4a95-93eb-9cb09d7fa4c7] Running
	I1002 20:39:05.711667 1017493 system_pods.go:89] "kube-apiserver-functional-850296" [27680d93-7739-4544-ad2c-7916686d9754] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:39:05.711673 1017493 system_pods.go:89] "kube-controller-manager-functional-850296" [c60690e8-4407-4308-8cad-74366a17a41c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:39:05.711676 1017493 system_pods.go:89] "kube-proxy-jf4r2" [597f79f9-4539-419c-bc29-29bd6f37112f] Running
	I1002 20:39:05.711682 1017493 system_pods.go:89] "kube-scheduler-functional-850296" [e8e589ab-dd32-46d2-8c98-8b4759d57a5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:39:05.711685 1017493 system_pods.go:89] "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
	I1002 20:39:05.711690 1017493 system_pods.go:126] duration metric: took 3.033129ms to wait for k8s-apps to be running ...
	I1002 20:39:05.711696 1017493 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:39:05.711752 1017493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:05.725920 1017493 system_svc.go:56] duration metric: took 14.214114ms WaitForService to wait for kubelet
	I1002 20:39:05.725937 1017493 kubeadm.go:586] duration metric: took 1.12871452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:39:05.725953 1017493 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:39:05.735506 1017493 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:39:05.735522 1017493 node_conditions.go:123] node cpu capacity is 2
	I1002 20:39:05.735532 1017493 node_conditions.go:105] duration metric: took 9.575037ms to run NodePressure ...
	I1002 20:39:05.735579 1017493 start.go:242] waiting for startup goroutines ...
	I1002 20:39:05.735587 1017493 start.go:247] waiting for cluster config update ...
	I1002 20:39:05.735597 1017493 start.go:256] writing updated cluster config ...
	I1002 20:39:05.735875 1017493 ssh_runner.go:195] Run: rm -f paused
	I1002 20:39:05.740011 1017493 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:39:05.805703 1017493 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j9sfw" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:39:07.811503 1017493 pod_ready.go:104] pod "coredns-66bc5c9577-j9sfw" is not "Ready", error: <nil>
	W1002 20:39:10.311824 1017493 pod_ready.go:104] pod "coredns-66bc5c9577-j9sfw" is not "Ready", error: <nil>
	W1002 20:39:12.313123 1017493 pod_ready.go:104] pod "coredns-66bc5c9577-j9sfw" is not "Ready", error: <nil>
	I1002 20:39:12.819279 1017493 pod_ready.go:94] pod "coredns-66bc5c9577-j9sfw" is "Ready"
	I1002 20:39:12.819294 1017493 pod_ready.go:86] duration metric: took 7.013577062s for pod "coredns-66bc5c9577-j9sfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:12.826608 1017493 pod_ready.go:83] waiting for pod "etcd-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:12.839600 1017493 pod_ready.go:94] pod "etcd-functional-850296" is "Ready"
	I1002 20:39:12.839623 1017493 pod_ready.go:86] duration metric: took 12.999729ms for pod "etcd-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:12.852169 1017493 pod_ready.go:83] waiting for pod "kube-apiserver-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:13.857442 1017493 pod_ready.go:94] pod "kube-apiserver-functional-850296" is "Ready"
	I1002 20:39:13.857456 1017493 pod_ready.go:86] duration metric: took 1.005274006s for pod "kube-apiserver-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:13.859670 1017493 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:13.864111 1017493 pod_ready.go:94] pod "kube-controller-manager-functional-850296" is "Ready"
	I1002 20:39:13.864124 1017493 pod_ready.go:86] duration metric: took 4.443014ms for pod "kube-controller-manager-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:14.009109 1017493 pod_ready.go:83] waiting for pod "kube-proxy-jf4r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:14.409456 1017493 pod_ready.go:94] pod "kube-proxy-jf4r2" is "Ready"
	I1002 20:39:14.409470 1017493 pod_ready.go:86] duration metric: took 400.346955ms for pod "kube-proxy-jf4r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:14.609727 1017493 pod_ready.go:83] waiting for pod "kube-scheduler-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:16.615190 1017493 pod_ready.go:94] pod "kube-scheduler-functional-850296" is "Ready"
	I1002 20:39:16.615203 1017493 pod_ready.go:86] duration metric: took 2.005463904s for pod "kube-scheduler-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:16.615217 1017493 pod_ready.go:40] duration metric: took 10.875180818s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
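	The pod_ready waits above poll each kube-system pod for its PodReady condition. A compact client-go sketch of the same wait; the kubeconfig path and pod name are illustrative, and the 4m0s budget mirrors the "extra waiting" timeout in the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's PodReady condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical kubeconfig path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		name := "kube-scheduler-functional-850296" // one of the pods waited on above
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready:", name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for", name)
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}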
	I1002 20:39:16.672078 1017493 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:39:16.677291 1017493 out.go:179] * Done! kubectl is now configured to use "functional-850296" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 20:51:12 functional-850296 crio[3524]: time="2025-10-02T20:51:12.74543656Z" level=info msg="Image docker.io/nginx:alpine not found" id=e99b646b-6a35-4b3d-bbd1-4f6eacd27932 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:12 functional-850296 crio[3524]: time="2025-10-02T20:51:12.745473179Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=e99b646b-6a35-4b3d-bbd1-4f6eacd27932 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:18 functional-850296 crio[3524]: time="2025-10-02T20:51:18.561088815Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3640e0e0-bae2-4eeb-a39d-7cbba079b905 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:51:26 functional-850296 crio[3524]: time="2025-10-02T20:51:26.746777174Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=5153bb86-701d-4c9b-9afe-c7c53084a9f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:26 functional-850296 crio[3524]: time="2025-10-02T20:51:26.74691436Z" level=info msg="Image docker.io/nginx:alpine not found" id=5153bb86-701d-4c9b-9afe-c7c53084a9f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:26 functional-850296 crio[3524]: time="2025-10-02T20:51:26.746962744Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=5153bb86-701d-4c9b-9afe-c7c53084a9f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:37 functional-850296 crio[3524]: time="2025-10-02T20:51:37.745872619Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=5fe7aad8-986f-4a8d-bc9f-63ee0556a6e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:37 functional-850296 crio[3524]: time="2025-10-02T20:51:37.746012775Z" level=info msg="Image docker.io/nginx:alpine not found" id=5fe7aad8-986f-4a8d-bc9f-63ee0556a6e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:37 functional-850296 crio[3524]: time="2025-10-02T20:51:37.746089622Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=5fe7aad8-986f-4a8d-bc9f-63ee0556a6e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:49 functional-850296 crio[3524]: time="2025-10-02T20:51:49.745583185Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8e6d3347-23fc-4430-bb9c-8f998acd6942 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:49 functional-850296 crio[3524]: time="2025-10-02T20:51:49.745719608Z" level=info msg="Image docker.io/nginx:alpine not found" id=8e6d3347-23fc-4430-bb9c-8f998acd6942 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:51:49 functional-850296 crio[3524]: time="2025-10-02T20:51:49.745757547Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=8e6d3347-23fc-4430-bb9c-8f998acd6942 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:04 functional-850296 crio[3524]: time="2025-10-02T20:52:04.745614497Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=aec241aa-1336-4dc6-86c2-39378297f542 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:04 functional-850296 crio[3524]: time="2025-10-02T20:52:04.745740434Z" level=info msg="Image docker.io/nginx:alpine not found" id=aec241aa-1336-4dc6-86c2-39378297f542 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:04 functional-850296 crio[3524]: time="2025-10-02T20:52:04.745777644Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=aec241aa-1336-4dc6-86c2-39378297f542 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:15 functional-850296 crio[3524]: time="2025-10-02T20:52:15.745435469Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b828c2ab-0ab1-4d6c-a3d6-a01a4559c24a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:15 functional-850296 crio[3524]: time="2025-10-02T20:52:15.745589376Z" level=info msg="Image docker.io/nginx:alpine not found" id=b828c2ab-0ab1-4d6c-a3d6-a01a4559c24a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:15 functional-850296 crio[3524]: time="2025-10-02T20:52:15.74563045Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=b828c2ab-0ab1-4d6c-a3d6-a01a4559c24a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:28 functional-850296 crio[3524]: time="2025-10-02T20:52:28.746008903Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b4893e41-6aac-4928-856c-043988e6b47a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:28 functional-850296 crio[3524]: time="2025-10-02T20:52:28.746190224Z" level=info msg="Image docker.io/nginx:alpine not found" id=b4893e41-6aac-4928-856c-043988e6b47a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:28 functional-850296 crio[3524]: time="2025-10-02T20:52:28.746239552Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=b4893e41-6aac-4928-856c-043988e6b47a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:28 functional-850296 crio[3524]: time="2025-10-02T20:52:28.74714965Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=deed6076-87fd-4981-be35-0da3303a36bc name=/runtime.v1.ImageService/PullImage
	Oct 02 20:52:28 functional-850296 crio[3524]: time="2025-10-02T20:52:28.75006553Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:52:59 functional-850296 crio[3524]: time="2025-10-02T20:52:59.022518079Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:53:29 functional-850296 crio[3524]: time="2025-10-02T20:53:29.342334322Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac\""
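	
	Every docker.io pull above stalls and is retried because of the unauthenticated pull rate limit reported in the kubelet section below. One way to take the in-cluster pull out of the picture, assuming a host docker daemon that can still pull (or is authenticated), is to side-load the image into the node:
	
	  # Pull once on the host, then copy the image into the cluster so CRI-O
	  # never has to contact docker.io for it.
	  docker pull docker.io/library/nginx:alpine
	  minikube -p functional-850296 image load docker.io/library/nginx:alpine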
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8dfb5e0595e07       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   14 minutes ago      Running             kube-proxy                2                   881e8e9f1e876       kube-proxy-jf4r2                            kube-system
	c6de90c680ce5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   14 minutes ago      Running             kindnet-cni               2                   9863cec15bb5a       kindnet-hzdd7                               kube-system
	991a81471c245       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   14 minutes ago      Running             coredns                   2                   137e0483bc7eb       coredns-66bc5c9577-j9sfw                    kube-system
	28b49cfffc635       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   14 minutes ago      Running             storage-provisioner       2                   50523e12462aa       storage-provisioner                         kube-system
	275e899f52009       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 minutes ago      Running             kube-apiserver            0                   88c931ebcfb5f       kube-apiserver-functional-850296            kube-system
	27f03308c1942       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 minutes ago      Running             kube-controller-manager   2                   c8908d405c069       kube-controller-manager-functional-850296   kube-system
	1d131e04547ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 minutes ago      Running             kube-scheduler            2                   57e9f61b27dc9       kube-scheduler-functional-850296            kube-system
	7d406b360d906       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 minutes ago      Running             etcd                      2                   827db98da488f       etcd-functional-850296                      kube-system
	6d1248452ad29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 minutes ago      Exited              etcd                      1                   827db98da488f       etcd-functional-850296                      kube-system
	4c2d0d935a5a3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 minutes ago      Exited              kube-controller-manager   1                   c8908d405c069       kube-controller-manager-functional-850296   kube-system
	57a5c63b7515c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 minutes ago      Exited              kube-scheduler            1                   57e9f61b27dc9       kube-scheduler-functional-850296            kube-system
	cdb96f1a50245       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   15 minutes ago      Exited              storage-provisioner       1                   50523e12462aa       storage-provisioner                         kube-system
	7878706c55ce3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   15 minutes ago      Exited              kindnet-cni               1                   9863cec15bb5a       kindnet-hzdd7                               kube-system
	c9663fe1dfee7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   15 minutes ago      Exited              coredns                   1                   137e0483bc7eb       coredns-66bc5c9577-j9sfw                    kube-system
	a795ea3c6cfd9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   15 minutes ago      Exited              kube-proxy                1                   881e8e9f1e876       kube-proxy-jf4r2                            kube-system
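	
	This table is what CRI-O reports for all containers, running and exited; the same view can be reproduced on the node itself (minikube ships crictl in the node image):
	
	  minikube -p functional-850296 ssh -- sudo crictl ps -a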
	
	
	==> coredns [991a81471c2453c500385f0a6c23bee980c37e0e4eee80f00f13b4914c9ba5de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52081 - 62515 "HINFO IN 8732729395583003918.4849294333637737484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003943884s
	
	
	==> coredns [c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35798 - 45296 "HINFO IN 1292503344635988855.3549566320544195153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013336221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
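	
	Both coredns instances print the same configuration SHA512, i.e. they loaded the same Corefile across the restart. In a kubeadm-provisioned cluster such as this one, that Corefile lives in the coredns ConfigMap and can be inspected with:
	
	  kubectl --context functional-850296 -n kube-system get configmap coredns \
	    -o jsonpath='{.data.Corefile}'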
	
	
	==> describe nodes <==
	Name:               functional-850296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-850296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=functional-850296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-850296
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:53:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:51:48 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:51:48 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:51:48 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:51:48 +0000   Thu, 02 Oct 2025 20:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-850296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 76773bab76c446648979f56596eaecff
	  System UUID:                d0defe04-ab05-4998-9efd-4465d0254c4c
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bqdjf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  default                     hello-node-connect-7d85dfc575-h8qf6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-j9sfw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-functional-850296                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-hzdd7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-functional-850296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-850296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-jf4r2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-850296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Warning  CgroupV1                 16m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node functional-850296 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node functional-850296 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node functional-850296 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	  Normal   NodeReady                15m                kubelet          Node functional-850296 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-850296 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-850296 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)  kubelet          Node functional-850296 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
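	
	The percentages under Allocated resources are requests (or limits) divided by the node's allocatable capacity: 850m CPU requested of 2000m allocatable is 42.5%, displayed as 42%. The same summary can be pulled on demand:
	
	  kubectl --context functional-850296 describe node functional-850296 \
	    | grep -A 10 'Allocated resources'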
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 20:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 20:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e] <==
	{"level":"warn","ts":"2025-10-02T20:38:19.523649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.531322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.555740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.585706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.601943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.623228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.672824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54592","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:38:43.609768Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:38:43.609814Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-850296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T20:38:43.609901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:38:43.763689Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:38:43.763794Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.763817Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T20:38:43.763851Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:38:43.763891Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:38:43.763949Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:38:43.763985Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:38:43.763993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:38:43.764095Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:38:43.764141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:38:43.764174Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.767777Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T20:38:43.767868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.767899Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T20:38:43.767905Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-850296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7d406b360d906cbd403e5610b39152427208b3006f82823e3a1bc43394a91391] <==
	{"level":"warn","ts":"2025-10-02T20:39:00.838989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.862714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.885092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.903937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.915243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.932238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.966616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.980154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.000818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.014976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.033024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.058714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.108726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.121077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.143692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.156964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.173133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.198478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.223054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.242213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.258901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.322142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51620","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:48:59.449195Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2025-10-02T20:48:59.457770Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":957,"took":"8.224062ms","hash":3054057781,"current-db-size-bytes":3186688,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3186688,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-10-02T20:48:59.457830Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3054057781,"revision":957,"compact-revision":-1}
	
	
	==> kernel <==
	 20:53:38 up  5:35,  0 user,  load average: 0.06, 0.19, 0.76
	Linux functional-850296 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93] <==
	I1002 20:38:15.626090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 20:38:15.642339       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 20:38:15.642579       1 main.go:148] setting mtu 1500 for CNI 
	I1002 20:38:15.642605       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 20:38:15.642619       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T20:38:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 20:38:15.823304       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 20:38:15.823379       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 20:38:15.823411       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 20:38:15.826976       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 20:38:20.728679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 20:38:20.728784       1 metrics.go:72] Registering metrics
	I1002 20:38:20.728874       1 controller.go:711] "Syncing nftables rules"
	I1002 20:38:25.826113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:38:25.826235       1 main.go:301] handling current node
	I1002 20:38:35.823570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:38:35.823603       1 main.go:301] handling current node
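	
	The single "nri plugin exited" line is benign: kindnet optionally attaches to the runtime's NRI socket, and on this node the socket simply is not there (NRI is not enabled in this CRI-O setup), which can be confirmed with:
	
	  minikube -p functional-850296 ssh -- ls -l /var/run/nri/nri.sock
	  # expected: No such file or directory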
	
	
	==> kindnet [c6de90c680ce5402050e16cb4f6e81ee97109c3bb463f7e3ffae85261344e670] <==
	I1002 20:51:33.519645       1 main.go:301] handling current node
	I1002 20:51:43.519324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:51:43.519446       1 main.go:301] handling current node
	I1002 20:51:53.519938       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:51:53.519975       1 main.go:301] handling current node
	I1002 20:52:03.520029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:52:03.520180       1 main.go:301] handling current node
	I1002 20:52:13.522171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:52:13.522206       1 main.go:301] handling current node
	I1002 20:52:23.528257       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:52:23.528293       1 main.go:301] handling current node
	I1002 20:52:33.522236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:52:33.522270       1 main.go:301] handling current node
	I1002 20:52:43.519072       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:52:43.519109       1 main.go:301] handling current node
	I1002 20:52:53.519616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:52:53.519658       1 main.go:301] handling current node
	I1002 20:53:03.527441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:53:03.527557       1 main.go:301] handling current node
	I1002 20:53:13.519352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:53:13.519388       1 main.go:301] handling current node
	I1002 20:53:23.522402       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:53:23.522522       1 main.go:301] handling current node
	I1002 20:53:33.522942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:53:33.522978       1 main.go:301] handling current node
	
	
	==> kube-apiserver [275e899f5200905471afcb9d9b210a0463a726a93b579fb14dc43c0cfc487a07] <==
	I1002 20:39:02.082124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 20:39:02.082274       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 20:39:02.082870       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 20:39:02.083346       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 20:39:02.083366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 20:39:02.083449       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 20:39:02.100397       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 20:39:02.102318       1 policy_source.go:240] refreshing policies
	E1002 20:39:02.106557       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:39:02.117200       1 cache.go:39] Caches are synced for autoregister controller
	I1002 20:39:02.133480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:39:02.764397       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:39:02.886483       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:39:04.310121       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:39:04.441156       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:39:04.513030       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:39:04.523147       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:39:05.581888       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:39:05.733846       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:39:05.783096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 20:39:20.000215       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.221.12"}
	I1002 20:39:26.213086       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.136.83"}
	I1002 20:43:36.099330       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.147.57"}
	I1002 20:45:00.828056       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.9.174"}
	I1002 20:49:02.041972       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
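	
	Each "allocated clusterIPs" line corresponds to a Service the tests created; the allocations can be cross-checked against the live objects:
	
	  # nginx-svc, hello-node-connect and hello-node should list the ClusterIPs
	  # allocated above (10.102.136.83, 10.102.147.57, 10.106.9.174).
	  kubectl --context functional-850296 get svc -o wide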
	
	
	==> kube-controller-manager [27f03308c19421d82964512a8f4396955b6f0220780d0d43a730552eb475fd76] <==
	I1002 20:39:05.452225       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 20:39:05.452254       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 20:39:05.452281       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 20:39:05.452416       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:39:05.452504       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 20:39:05.452600       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 20:39:05.461148       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:39:05.459277       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 20:39:05.459325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:39:05.460306       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:39:05.461653       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:39:05.461709       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-850296"
	I1002 20:39:05.461789       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:39:05.459344       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:39:05.460325       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:39:05.460562       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:39:05.475118       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:39:05.476407       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:39:05.478941       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:39:05.479016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:39:05.485126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:39:05.495376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:39:05.507627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:39:05.507651       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:39:05.507659       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5] <==
	I1002 20:38:23.949252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:38:23.950781       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:38:23.955937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:38:23.963251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:38:23.963276       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:38:23.963284       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:38:23.972900       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:38:23.975307       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:38:23.976452       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:38:23.976559       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 20:38:23.976601       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:38:23.976579       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:38:23.976716       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:38:23.976803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:38:23.976567       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:38:23.976591       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 20:38:23.977108       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 20:38:23.977593       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-850296"
	I1002 20:38:23.977647       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:38:23.985638       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 20:38:23.985687       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 20:38:23.985706       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 20:38:23.985711       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 20:38:23.985717       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 20:38:23.990173       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8dfb5e0595e0720813e66577e5555d958f1259cee1c6366fa3f443e2b14c0ae1] <==
	I1002 20:39:03.240087       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:39:03.376638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:39:03.478249       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:39:03.478356       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:39:03.478468       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:39:03.497236       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:39:03.497288       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:39:03.501346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:39:03.501743       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:39:03.501818       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:39:03.505948       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:39:03.506161       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:39:03.506197       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:39:03.506932       1 config.go:309] "Starting node config controller"
	I1002 20:39:03.506952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:39:03.506959       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:39:03.507525       1 config.go:200] "Starting service config controller"
	I1002 20:39:03.507544       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:39:03.506028       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:39:03.609852       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:39:03.609911       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:39:03.611217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
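	
	The "Kube-proxy configuration may be incomplete or incorrect" warning carries its own remedy. In a kubeadm-style cluster kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap, so a sketch of the suggested change (the field name follows the warning; the warning itself is informational, so applying this is optional):
	
	  kubectl --context functional-850296 -n kube-system edit configmap kube-proxy
	  #   in the embedded config.conf, set:
	  #     nodePortAddresses: ["primary"]
	  kubectl --context functional-850296 -n kube-system rollout restart daemonset kube-proxy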
	
	
	==> kube-proxy [a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09] <==
	I1002 20:38:18.258892       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:38:18.975793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:38:20.801120       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:38:20.808394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:38:20.821259       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:38:21.048413       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:38:21.048537       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:38:21.070806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:38:21.071210       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:38:21.071227       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:38:21.072703       1 config.go:200] "Starting service config controller"
	I1002 20:38:21.072770       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:38:21.072820       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:38:21.079579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:38:21.079688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:38:21.076958       1 config.go:309] "Starting node config controller"
	I1002 20:38:21.079781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:38:21.079810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:38:21.075789       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:38:21.079890       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:38:21.079926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:38:21.173900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d131e04547edd912ef6b1b2a69a2e3c509e8bd119fdbc1e1e5e804ca19c5da5] <==
	I1002 20:39:00.118670       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:39:02.038429       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:39:02.038553       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:39:02.038590       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:39:02.038640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:39:02.071647       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:39:02.074053       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:39:02.076511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:39:02.076614       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:39:02.077076       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:39:02.077148       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:39:02.178156       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
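	
	The requestheader warning above is typically a startup race: the scheduler asked for the extension-apiserver-authentication ConfigMap before RBAC was ready, and the "Caches are synced" line shows the retry succeeded. The log's own suggested fix, with illustrative names filled in (the rolebinding name and service-account subject here are hypothetical):
	
	  kubectl --context functional-850296 -n kube-system create rolebinding \
	    scheduler-auth-reader \
	    --role=extension-apiserver-authentication-reader \
	    --serviceaccount=kube-system:kube-scheduler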
	
	
	==> kube-scheduler [57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea] <==
	I1002 20:38:18.329695       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:38:20.519064       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:38:20.519089       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:38:20.519099       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:38:20.519118       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:38:20.630871       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:38:20.630903       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:38:20.641277       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:38:20.654143       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:20.658183       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:20.658269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:38:20.760895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:43.615409       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 20:38:43.615431       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 20:38:43.615453       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 20:38:43.615508       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:43.615682       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 20:38:43.615697       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 20:51:46 functional-850296 kubelet[3848]: E1002 20:51:46.746207    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:51:49 functional-850296 kubelet[3848]: E1002 20:51:49.746534    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:51:50 functional-850296 kubelet[3848]: E1002 20:51:50.746556    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:51:54 functional-850296 kubelet[3848]: E1002 20:51:54.747324    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:52:01 functional-850296 kubelet[3848]: E1002 20:52:01.745416    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:52:04 functional-850296 kubelet[3848]: E1002 20:52:04.746019    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:52:05 functional-850296 kubelet[3848]: E1002 20:52:05.745476    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:52:06 functional-850296 kubelet[3848]: E1002 20:52:06.746123    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:52:15 functional-850296 kubelet[3848]: E1002 20:52:15.744676    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:52:15 functional-850296 kubelet[3848]: E1002 20:52:15.746019    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:52:20 functional-850296 kubelet[3848]: E1002 20:52:20.746409    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:52:21 functional-850296 kubelet[3848]: E1002 20:52:21.744842    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:52:28 functional-850296 kubelet[3848]: E1002 20:52:28.745819    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:52:34 functional-850296 kubelet[3848]: E1002 20:52:34.745540    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-h8qf6" podUID="35db67a4-9151-4d30-8df4-3dd0f8212370"
	Oct 02 20:52:35 functional-850296 kubelet[3848]: E1002 20:52:35.745034    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:52:40 functional-850296 kubelet[3848]: E1002 20:52:40.745623    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:52:46 functional-850296 kubelet[3848]: E1002 20:52:46.746746    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:52:52 functional-850296 kubelet[3848]: E1002 20:52:52.745722    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:53:01 functional-850296 kubelet[3848]: E1002 20:53:01.745505    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:53:07 functional-850296 kubelet[3848]: E1002 20:53:07.745201    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:53:13 functional-850296 kubelet[3848]: E1002 20:53:13.745248    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:53:22 functional-850296 kubelet[3848]: E1002 20:53:22.745553    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:53:25 functional-850296 kubelet[3848]: E1002 20:53:25.744823    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:53:36 functional-850296 kubelet[3848]: E1002 20:53:36.745576    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bqdjf" podUID="1944770c-61a2-4381-867a-98a7fe0db025"
	Oct 02 20:53:36 functional-850296 kubelet[3848]: E1002 20:53:36.746709    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	
	
	==> storage-provisioner [28b49cfffc6351da29c7557ee872755ca084db930b14770b1ba25cf3d451dfe7] <==
	W1002 20:53:14.594268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:16.596876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:16.603284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:18.606299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:18.610412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:20.613875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:20.620271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:22.623211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:22.628031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:24.631459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:24.635914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:26.639304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:26.643971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:28.651726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:28.659490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:30.662448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:30.667326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:32.669888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:32.674222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:34.677548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:34.682293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:36.684843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:36.690197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:38.694503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:53:38.701397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf] <==
	I1002 20:38:15.999486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 20:38:20.874431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 20:38:20.874559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 20:38:20.899722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:24.365016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:28.625466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:32.223997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:35.277427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.300024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.305158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:38:38.305311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 20:38:38.305677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99de5c4e-838e-4677-b696-969817484c14", APIVersion:"v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6 became leader
	I1002 20:38:38.305708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6!
	W1002 20:38:38.307641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.316877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:38:38.406699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6!
	W1002 20:38:40.319248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:40.326803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:42.337454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:42.346693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
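The wall of "v1 Endpoints is deprecated in v1.33+" warnings above is explained by the second storage-provisioner log block: the provisioner takes its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object, so every acquire/renew round-trips through the deprecated API. On this run it is noise, not a failure cause. For reference, a minimal sketch of the Lease-based lock that client-go offers as the replacement, assuming the provisioner were ported to it (the identity string below is hypothetical):

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io/v1 Lease lock; the provisioner in the log above
		// locks a v1 Endpoints object instead, which triggers the warnings.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:    client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{
				Identity: "functional-850296-provisioner", // hypothetical identity
			},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() {
					// release state; another replica takes over
				},
			},
		})
	}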
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
helpers_test.go:269: (dbg) Run:  kubectl --context functional-850296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-850296 describe pod hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-850296 describe pod hello-node-75c85bcc94-bqdjf hello-node-connect-7d85dfc575-h8qf6 nginx-svc sp-pod:

-- stdout --
	Name:             hello-node-75c85bcc94-bqdjf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:45:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cxcc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8cxcc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m38s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bqdjf to functional-850296
	  Normal   Pulling    2m24s (x5 over 8m38s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m21s (x5 over 7m27s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     2m21s (x5 over 7m27s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s (x16 over 7m27s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x20 over 7m27s)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-h8qf6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:43:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsbhd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fsbhd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-h8qf6 to functional-850296
	  Normal   Pulling    3m34s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m34s (x5 over 8m34s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     3m34s (x5 over 8m34s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m13s (x16 over 8m33s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    65s (x21 over 8m33s)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:39:26 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rxbd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8rxbd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/nginx-svc to functional-850296
	  Warning  Failed     13m                   kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m55s (x5 over 14m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m54s (x5 over 13m)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m54s (x4 over 11m)   kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m38s (x17 over 13m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    95s (x22 over 13m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:39:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4br99 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4br99:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  14m                    default-scheduler  Successfully assigned default/sp-pod to functional-850296
	  Warning  Failed     9m34s (x2 over 12m)    kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m28s (x5 over 14m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m21s (x5 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m21s (x3 over 7m27s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     53s (x17 over 12m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x21 over 12m)      kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.52s)
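Two independent pull failures drive this FAIL, and both recur across the other Functional failures in this report. The hello-node pods reference the unqualified name kicbase/echo-server, which CRI-O rejects under short-name-mode = "enforcing" because the short name resolves ambiguously across the configured unqualified-search registries; the nginx-svc and sp-pod pulls fail separately on Docker Hub's unauthenticated rate limit (toomanyrequests). The short-name side can be avoided either by fully qualifying the image in the manifest (docker.io/kicbase/echo-server) or by a node-side alias; a minimal sketch of such a drop-in, assuming the node honors the standard containers-registries.conf(5) locations (the file path is hypothetical):

	# /etc/containers/registries.conf.d/01-echo-server.conf (hypothetical path)
	# Maps the ambiguous short name to a fully qualified reference so that
	# short-name-mode = "enforcing" can resolve it deterministically.
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

The rate-limit errors have no equivalent node-side fix; they call for authenticated pulls (imagePullSecrets on the pods) or a pull-through registry mirror.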

TestFunctional/parallel/PersistentVolumeClaim (249.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003246584s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-850296 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-850296 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-850296 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-850296 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3444e5e8-03bd-4963-9c74-f52b3adfa223] Pending
helpers_test.go:352: "sp-pod" [3444e5e8-03bd-4963-9c74-f52b3adfa223] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1002 20:41:34.539632  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:42:02.241541  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 20:43:33.151097586 +0000 UTC m=+1529.251034124
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-850296 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-850296 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-850296/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:39:32 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4br99 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-4br99:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  4m                 default-scheduler  Successfully assigned default/sp-pod to functional-850296
  Warning  Failed     2m                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m                 kubelet            Error: ErrImagePull
  Normal   BackOff    119s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     119s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    108s (x2 over 4m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-850296 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-850296 logs sp-pod -n default: exit status 1 (86.327063ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-850296 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
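For context, the objects this step applies can be reconstructed from the describe output above: a claim named myclaim and a pod sp-pod (label test=storage-provisioner) whose myfrontend container mounts the claim at /tmp/mount. The actual testdata/storage-provisioner/pvc.yaml and pod.yaml are not reproduced in this report, so the sketch below is an approximation; the accessModes and storage request in particular are assumed:

	# Approximate reconstruction; not the verbatim testdata manifests.
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]   # assumed
	  resources:
	    requests:
	      storage: 500Mi               # assumed; not visible in the logs
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: docker.io/nginx
	    volumeMounts:
	    - mountPath: /tmp/mount
	      name: mypd
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim

The pod schedules and its conditions show the volume initialized fine; it is the docker.io/nginx pull, rate-limited as shown in the events, that keeps it Pending past the 4m0s deadline.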
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-850296
helpers_test.go:243: (dbg) docker inspect functional-850296:

-- stdout --
	[
	    {
	        "Id": "b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc",
	        "Created": "2025-10-02T20:36:51.435019192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1013336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:36:51.495993066Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/hosts",
	        "LogPath": "/var/lib/docker/containers/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc-json.log",
	        "Name": "/functional-850296",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-850296:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-850296",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc",
	                "LowerDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38ed9a4dce5ec80b8bf63eef4ac405791d407da39ca89db404ba921270c9d947/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-850296",
	                "Source": "/var/lib/docker/volumes/functional-850296/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-850296",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-850296",
	                "name.minikube.sigs.k8s.io": "functional-850296",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb52b33547bb096a1a4f615461c35a0bcedb7dcf2cb23f80fe4ff73d51497877",
	            "SandboxKey": "/var/run/docker/netns/cb52b33547bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33914"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33913"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-850296": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d6:c1:25:47:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56b52cdb1d427e44c48e269bed51ab58dc1dd45aa5f7a71ed9c387d2a4680ab1",
	                    "EndpointID": "4e73ea947047ef10a5fe342cfe5413df47326a143b97016ab2d446b820f6b9a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-850296",
	                        "b3320f49b450"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-850296 -n functional-850296
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 logs -n 25
I1002 20:43:33.786118  993954 retry.go:31] will retry after 6.802580393s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 logs -n 25: (1.427320754s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │                     │
	│ cache   │ functional-850296 cache reload                                                                                            │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ ssh     │ functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ kubectl │ functional-850296 kubectl -- --context functional-850296 get pods                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ start   │ -p functional-850296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:39 UTC │
	│ service │ invalid-svc -p functional-850296                                                                                          │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ config  │ functional-850296 config unset cpus                                                                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ cp      │ functional-850296 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config get cpus                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ config  │ functional-850296 config set cpus 2                                                                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config get cpus                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config unset cpus                                                                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ ssh     │ functional-850296 ssh -n functional-850296 sudo cat /home/docker/cp-test.txt                                              │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ config  │ functional-850296 config get cpus                                                                                         │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ ssh     │ functional-850296 ssh echo hello                                                                                          │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ cp      │ functional-850296 cp functional-850296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd631086801/001/cp-test.txt │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ ssh     │ functional-850296 ssh cat /etc/hostname                                                                                   │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ ssh     │ functional-850296 ssh -n functional-850296 sudo cat /home/docker/cp-test.txt                                              │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ tunnel  │ functional-850296 tunnel --alsologtostderr                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ tunnel  │ functional-850296 tunnel --alsologtostderr                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ cp      │ functional-850296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ tunnel  │ functional-850296 tunnel --alsologtostderr                                                                                │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ ssh     │ functional-850296 ssh -n functional-850296 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-850296 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:38:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:38:42.361139 1017493 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:38:42.361285 1017493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:38:42.361289 1017493 out.go:374] Setting ErrFile to fd 2...
	I1002 20:38:42.361293 1017493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:38:42.361609 1017493 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:38:42.362026 1017493 out.go:368] Setting JSON to false
	I1002 20:38:42.363130 1017493 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19260,"bootTime":1759418263,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:38:42.363211 1017493 start.go:140] virtualization:  
	I1002 20:38:42.366893 1017493 out.go:179] * [functional-850296] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:38:42.370878 1017493 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:38:42.371005 1017493 notify.go:221] Checking for updates...
	I1002 20:38:42.377020 1017493 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:38:42.380139 1017493 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:38:42.382947 1017493 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:38:42.385838 1017493 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:38:42.388854 1017493 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:38:42.392700 1017493 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:38:42.392808 1017493 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:38:42.426202 1017493 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:38:42.426308 1017493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:38:42.483374 1017493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 20:38:42.474408682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:38:42.483465 1017493 docker.go:319] overlay module found
	I1002 20:38:42.486484 1017493 out.go:179] * Using the docker driver based on existing profile
	I1002 20:38:42.489339 1017493 start.go:306] selected driver: docker
	I1002 20:38:42.489347 1017493 start.go:936] validating driver "docker" against &{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:38:42.489430 1017493 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:38:42.489567 1017493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:38:42.542151 1017493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 20:38:42.532636676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:38:42.542575 1017493 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:38:42.542601 1017493 cni.go:84] Creating CNI manager for ""
	I1002 20:38:42.542657 1017493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:38:42.542696 1017493 start.go:350] cluster config:
	{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:38:42.547700 1017493 out.go:179] * Starting "functional-850296" primary control-plane node in "functional-850296" cluster
	I1002 20:38:42.550619 1017493 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:38:42.553569 1017493 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:38:42.556441 1017493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:38:42.556486 1017493 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:38:42.556507 1017493 cache.go:59] Caching tarball of preloaded images
	I1002 20:38:42.556537 1017493 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:38:42.556590 1017493 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 20:38:42.556604 1017493 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:38:42.556719 1017493 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/config.json ...
	I1002 20:38:42.575706 1017493 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:38:42.575718 1017493 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:38:42.575736 1017493 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:38:42.575758 1017493 start.go:361] acquireMachinesLock for functional-850296: {Name:mk32592cb97eb8369193d1c54e8256f2b98af5f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:38:42.575823 1017493 start.go:365] duration metric: took 45.972µs to acquireMachinesLock for "functional-850296"
	I1002 20:38:42.575842 1017493 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:38:42.575847 1017493 fix.go:55] fixHost starting: 
	I1002 20:38:42.576106 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:38:42.594086 1017493 fix.go:113] recreateIfNeeded on functional-850296: state=Running err=<nil>
	W1002 20:38:42.594106 1017493 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:38:42.597347 1017493 out.go:252] * Updating the running docker "functional-850296" container ...
	I1002 20:38:42.597374 1017493 machine.go:93] provisionDockerMachine start ...
	I1002 20:38:42.597465 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:42.614839 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:42.615145 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:42.615152 1017493 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:38:42.750456 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-850296
	
	I1002 20:38:42.750484 1017493 ubuntu.go:182] provisioning hostname "functional-850296"
	I1002 20:38:42.750550 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:42.770482 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:42.770789 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:42.770798 1017493 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-850296 && echo "functional-850296" | sudo tee /etc/hostname
	I1002 20:38:42.911274 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-850296
	
	I1002 20:38:42.911340 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:42.929615 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:42.929912 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:42.929927 1017493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-850296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-850296/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-850296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:38:43.062557 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
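	The script above keeps /etc/hosts idempotent: it only writes when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry in place rather than appending a duplicate. A quick post-provision check (a sketch, run on the node):
	
	  grep -c '^127\.0\.1\.1' /etc/hosts     # expect exactly 1
	  grep 'functional-850296' /etc/hosts    # the mapping written above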
	I1002 20:38:43.062572 1017493 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 20:38:43.062609 1017493 ubuntu.go:190] setting up certificates
	I1002 20:38:43.062617 1017493 provision.go:84] configureAuth start
	I1002 20:38:43.062676 1017493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-850296
	I1002 20:38:43.080957 1017493 provision.go:143] copyHostCerts
	I1002 20:38:43.081017 1017493 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 20:38:43.081033 1017493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 20:38:43.081113 1017493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 20:38:43.081210 1017493 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 20:38:43.081213 1017493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 20:38:43.081237 1017493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 20:38:43.081283 1017493 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 20:38:43.081286 1017493 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 20:38:43.081314 1017493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 20:38:43.081383 1017493 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.functional-850296 san=[127.0.0.1 192.168.49.2 functional-850296 localhost minikube]
	I1002 20:38:43.253493 1017493 provision.go:177] copyRemoteCerts
	I1002 20:38:43.253557 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:38:43.253600 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:43.275038 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:43.374121 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:38:43.392340 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:38:43.411862 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:38:43.430306 1017493 provision.go:87] duration metric: took 367.65375ms to configureAuth
	I1002 20:38:43.430324 1017493 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:38:43.430522 1017493 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:38:43.430623 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:43.450151 1017493 main.go:141] libmachine: Using SSH client type: native
	I1002 20:38:43.450492 1017493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33910 <nil> <nil>}
	I1002 20:38:43.450507 1017493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:38:48.831847 1017493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:38:48.831860 1017493 machine.go:96] duration metric: took 6.234479924s to provisionDockerMachine
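	Most of that 6.2s is the CRI-O restart triggered by the drop-in written above. To confirm the drop-in landed (a sketch, assuming shell access to the node, e.g. via minikube ssh):
	
	  cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  systemctl is-active crio           # "active" once the restart completes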
	I1002 20:38:48.831870 1017493 start.go:294] postStartSetup for "functional-850296" (driver="docker")
	I1002 20:38:48.831879 1017493 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:38:48.831948 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:38:48.831987 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:48.850548 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:48.945891 1017493 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:38:48.949234 1017493 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:38:48.949251 1017493 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:38:48.949261 1017493 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 20:38:48.949320 1017493 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 20:38:48.949394 1017493 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 20:38:48.949470 1017493 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/test/nested/copy/993954/hosts -> hosts in /etc/test/nested/copy/993954
	I1002 20:38:48.949513 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/993954
	I1002 20:38:48.956830 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 20:38:48.973888 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/test/nested/copy/993954/hosts --> /etc/test/nested/copy/993954/hosts (40 bytes)
	I1002 20:38:48.991067 1017493 start.go:297] duration metric: took 159.183681ms for postStartSetup
	I1002 20:38:48.991153 1017493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:38:48.991191 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:49.009230 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:49.103201 1017493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
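	The two df probes sample /var in different units: percent used (column 5 of df -h) and whole gigabytes free (column 4 of df -BG). Runnable standalone (a sketch; values shown are illustrative):
	
	  df -h  /var | awk 'NR==2{print $5}'    # e.g. "23%"
	  df -BG /var | awk 'NR==2{print $4}'    # e.g. "15G"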
	I1002 20:38:49.108852 1017493 fix.go:57] duration metric: took 6.532998032s for fixHost
	I1002 20:38:49.108867 1017493 start.go:84] releasing machines lock for "functional-850296", held for 6.533036833s
	I1002 20:38:49.108934 1017493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-850296
	I1002 20:38:49.126629 1017493 ssh_runner.go:195] Run: cat /version.json
	I1002 20:38:49.126675 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:49.126707 1017493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:38:49.126757 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:38:49.149171 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:49.153001 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:38:49.241839 1017493 ssh_runner.go:195] Run: systemctl --version
	I1002 20:38:49.335144 1017493 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:38:49.370778 1017493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:38:49.375119 1017493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:38:49.375190 1017493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:38:49.382938 1017493 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
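	The find invocation above is logged with its shell quoting stripped; with quoting restored it reads roughly as follows (a sketch, run as root), renaming any bridge/podman CNI configs so they stop being loaded:
	
	  find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;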
	I1002 20:38:49.382951 1017493 start.go:496] detecting cgroup driver to use...
	I1002 20:38:49.382984 1017493 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 20:38:49.383040 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:38:49.398078 1017493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:38:49.411173 1017493 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:38:49.411230 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:38:49.426897 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:38:49.441132 1017493 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:38:49.575868 1017493 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:38:49.714615 1017493 docker.go:234] disabling docker service ...
	I1002 20:38:49.714671 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:38:49.731079 1017493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:38:49.744151 1017493 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:38:49.902454 1017493 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:38:50.048738 1017493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:38:50.062275 1017493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:38:50.080400 1017493 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:38:50.080465 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.091115 1017493 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:38:50.091181 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.102009 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.112282 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.122366 1017493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:38:50.131366 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.140773 1017493 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:38:50.150486 1017493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
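	The sed edits above leave four settings in 02-crio.conf: the pinned pause image, the cgroupfs cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. A spot check (a sketch; expected values per the commands above):
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",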
	I1002 20:38:50.159801 1017493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:38:50.167865 1017493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:38:50.175826 1017493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:38:50.305209 1017493 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:38:50.505293 1017493 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:38:50.505355 1017493 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:38:50.509309 1017493 start.go:564] Will wait 60s for crictl version
	I1002 20:38:50.509362 1017493 ssh_runner.go:195] Run: which crictl
	I1002 20:38:50.513080 1017493 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:38:50.538463 1017493 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:38:50.538554 1017493 ssh_runner.go:195] Run: crio --version
	I1002 20:38:50.566263 1017493 ssh_runner.go:195] Run: crio --version
	I1002 20:38:50.600610 1017493 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:38:50.603732 1017493 cli_runner.go:164] Run: docker network inspect functional-850296 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:38:50.619742 1017493 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:38:50.626736 1017493 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:38:50.629472 1017493 kubeadm.go:883] updating cluster {Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:38:50.629598 1017493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:38:50.629670 1017493 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:38:50.666818 1017493 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:38:50.666830 1017493 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:38:50.666884 1017493 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:38:50.693412 1017493 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:38:50.693424 1017493 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:38:50.693430 1017493 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:38:50.693521 1017493 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-850296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
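	The unit fragment above becomes the systemd drop-in scp'd a few lines below (10-kubeadm.conf); the empty ExecStart= line clears the stock command before replacing it. To inspect the merged unit on the node (a sketch):
	
	  systemctl cat kubelet                           # base unit plus the drop-in
	  systemctl show kubelet -p ExecStart --no-pager  # the effective command line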
	I1002 20:38:50.693624 1017493 ssh_runner.go:195] Run: crio config
	I1002 20:38:50.759664 1017493 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:38:50.759696 1017493 cni.go:84] Creating CNI manager for ""
	I1002 20:38:50.759706 1017493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:38:50.759714 1017493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:38:50.759736 1017493 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-850296 NodeName:functional-850296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:38:50.759859 1017493 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-850296"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
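	
	The rendered config above is staged as kubeadm.yaml.new before being diffed against the previous run's file. Either of these exercises it without touching the node (a sketch; kubeadm config validate requires kubeadm v1.26+):
	
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run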
	
	I1002 20:38:50.759929 1017493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:38:50.768165 1017493 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:38:50.768233 1017493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:38:50.775885 1017493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:38:50.788993 1017493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:38:50.801999 1017493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1002 20:38:50.814950 1017493 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:38:50.819539 1017493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:38:50.953656 1017493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:38:50.968784 1017493 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296 for IP: 192.168.49.2
	I1002 20:38:50.968795 1017493 certs.go:195] generating shared ca certs ...
	I1002 20:38:50.968815 1017493 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:38:50.968981 1017493 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 20:38:50.969027 1017493 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 20:38:50.969033 1017493 certs.go:257] generating profile certs ...
	I1002 20:38:50.969138 1017493 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.key
	I1002 20:38:50.969184 1017493 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/apiserver.key.e3856381
	I1002 20:38:50.969238 1017493 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/proxy-client.key
	I1002 20:38:50.969373 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 20:38:50.969402 1017493 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 20:38:50.969409 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:38:50.969434 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:38:50.969462 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:38:50.969483 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 20:38:50.969544 1017493 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 20:38:50.970419 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:38:50.989341 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:38:51.010309 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:38:51.029172 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:38:51.045959 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:38:51.063264 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:38:51.081582 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:38:51.101351 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:38:51.120306 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 20:38:51.139879 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:38:51.158875 1017493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 20:38:51.177954 1017493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:38:51.190525 1017493 ssh_runner.go:195] Run: openssl version
	I1002 20:38:51.196837 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 20:38:51.204824 1017493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 20:38:51.208720 1017493 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 20:38:51.208779 1017493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 20:38:51.249859 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:38:51.258091 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:38:51.268417 1017493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:38:51.273197 1017493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:38:51.273253 1017493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:38:51.323556 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:38:51.334073 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 20:38:51.342873 1017493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 20:38:51.349848 1017493 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 20:38:51.349917 1017493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 20:38:51.413700 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
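	Each openssl x509 -hash / ln -fs pair above implements OpenSSL's c_rehash convention: at verification time a CA is located through a symlink named after its subject hash. The pattern in isolation (a sketch):
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # yields b5213941.0 here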
	I1002 20:38:51.423808 1017493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:38:51.429889 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:38:51.471943 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:38:51.515762 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:38:51.557180 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:38:51.598791 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:38:51.640333 1017493 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
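	Each -checkend 86400 call above exits non-zero if the certificate expires within 24 hours, which is the cue to regenerate it; all pass here. The idiom in isolation (a sketch):
	
	  if sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	    echo "valid for at least 24h"
	  else
	    echo "expires within 24h"
	  fi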
	I1002 20:38:51.681546 1017493 kubeadm.go:400] StartCluster: {Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:38:51.681624 1017493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:38:51.681701 1017493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:38:51.711440 1017493 cri.go:89] found id: "6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e"
	I1002 20:38:51.711452 1017493 cri.go:89] found id: "4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5"
	I1002 20:38:51.711455 1017493 cri.go:89] found id: "57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea"
	I1002 20:38:51.711463 1017493 cri.go:89] found id: "cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf"
	I1002 20:38:51.711468 1017493 cri.go:89] found id: "7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93"
	I1002 20:38:51.711470 1017493 cri.go:89] found id: "c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1"
	I1002 20:38:51.711472 1017493 cri.go:89] found id: "a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09"
	I1002 20:38:51.711474 1017493 cri.go:89] found id: "4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c"
	I1002 20:38:51.711476 1017493 cri.go:89] found id: ""
	I1002 20:38:51.711528 1017493 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 20:38:51.723749 1017493 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:38:51Z" level=error msg="open /run/runc: no such file or directory"
	I1002 20:38:51.723855 1017493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:38:51.732599 1017493 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:38:51.732609 1017493 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:38:51.732665 1017493 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:38:51.740815 1017493 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:51.741331 1017493 kubeconfig.go:125] found "functional-850296" server: "https://192.168.49.2:8441"
	I1002 20:38:51.742700 1017493 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:38:51.750496 1017493 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:37:01.216112370 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:38:50.809349143 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
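
The unified diff above is the entire drift check: minikube writes the desired config to kubeadm.yaml.new, diffs it against the deployed kubeadm.yaml, and reconfigures the control plane if they differ. A sketch of that check (same paths as the log; `diff -u` exiting 1 signals drift):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // diff exits 1 when the files differ (here: the
            // enable-admission-plugins value changed), so reconfigure.
            fmt.Printf("config drift detected:\n%s", out)
            return
        }
        fmt.Println("kubeadm config unchanged")
    }
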
	I1002 20:38:51.750506 1017493 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:38:51.750525 1017493 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:38:51.750583 1017493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:38:51.778347 1017493 cri.go:89] found id: "6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e"
	I1002 20:38:51.778359 1017493 cri.go:89] found id: "4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5"
	I1002 20:38:51.778363 1017493 cri.go:89] found id: "57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea"
	I1002 20:38:51.778365 1017493 cri.go:89] found id: "cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf"
	I1002 20:38:51.778368 1017493 cri.go:89] found id: "7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93"
	I1002 20:38:51.778371 1017493 cri.go:89] found id: "c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1"
	I1002 20:38:51.778373 1017493 cri.go:89] found id: "a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09"
	I1002 20:38:51.778376 1017493 cri.go:89] found id: "4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c"
	I1002 20:38:51.778378 1017493 cri.go:89] found id: ""
	I1002 20:38:51.778383 1017493 cri.go:252] Stopping containers: [6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e 4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5 57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf 7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93 c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1 a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09 4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c]
	I1002 20:38:51.778439 1017493 ssh_runner.go:195] Run: which crictl
	I1002 20:38:51.782308 1017493 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e 4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5 57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf 7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93 c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1 a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09 4a400288695a98aab58ee93cfd146191462b78481a90c612b491e094f0d7b00c
	I1002 20:38:51.853650 1017493 ssh_runner.go:195] Run: sudo systemctl stop kubelet
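
Before reconfiguring, every kube-system container is stopped: the IDs are gathered with a crictl label filter and passed to `crictl stop`, then the kubelet itself is stopped so it cannot restart the static pods mid-rewrite. A compact sketch of those two steps (flags copied from the log above):

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Gather all kube-system container IDs, running or not.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) > 0 {
            args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
            if err := exec.Command("sudo", args...).Run(); err != nil {
                log.Fatal(err)
            }
        }
        // Stop the kubelet before touching /etc/kubernetes.
        if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
            log.Fatal(err)
        }
    }
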
	I1002 20:38:51.977243 1017493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:38:51.985605 1017493 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  2 20:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 20:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:37 /etc/kubernetes/scheduler.conf
	
	I1002 20:38:51.985689 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:38:51.994394 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:38:52.008229 1017493 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:52.008294 1017493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:38:52.016389 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:38:52.024460 1017493 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:52.024520 1017493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:38:52.032490 1017493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:38:52.040551 1017493 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:38:52.040614 1017493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
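
Each existing kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; admin.conf matched and is kept, while kubelet.conf, controller-manager.conf, and scheduler.conf did not (grep exiting 1 above) and are deleted so the upcoming `kubeadm init phase kubeconfig all` regenerates them. Sketched:

    package main

    import "os/exec"

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8441"
        for _, f := range []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits non-zero on no match: the file points at a stale
            // endpoint, so remove it and let kubeadm recreate it.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
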
	I1002 20:38:52.048422 1017493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:38:52.057120 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:52.109514 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.364753 1017493 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.255213164s)
	I1002 20:38:56.364813 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.595336 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:38:56.666839 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
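
The control plane is then rebuilt phase by phase rather than with a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the new config. The sequence as a sketch (binary and config paths from the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.34.1/kubeadm"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{kubeadm}, append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", p, err, out)
            }
        }
    }
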
	I1002 20:38:56.723308 1017493 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:38:56.723375 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:38:57.224304 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:38:57.724442 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:38:57.742432 1017493 api_server.go:72] duration metric: took 1.01913568s to wait for apiserver process to appear ...
	I1002 20:38:57.742446 1017493 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:38:57.742464 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:01.944066 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:39:01.944084 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:39:01.944097 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:01.959670 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:39:01.959687 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:39:02.243131 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:02.252011 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:39:02.252053 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:39:02.742591 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:02.764212 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:39:02.764235 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:39:03.242738 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:03.252790 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:39:03.252808 1017493 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:39:03.743157 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:03.753263 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 20:39:03.769068 1017493 api_server.go:141] control plane version: v1.34.1
	I1002 20:39:03.769084 1017493 api_server.go:131] duration metric: took 6.026632462s to wait for apiserver health ...
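
The 403 → 500 → 200 progression above is the normal apiserver boot sequence: 403 while anonymous access to /healthz is still forbidden (the RBAC bootstrap roles that permit it are not yet in place), 500 while the flagged poststart hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200. A sketch of the ~500 ms poll loop, with endpoint taken from the log and certificate verification skipped purely for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8441/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403/500 are transient while bootstrap hooks run: keep waiting.
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /healthz")
    }
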
	I1002 20:39:03.769092 1017493 cni.go:84] Creating CNI manager for ""
	I1002 20:39:03.769098 1017493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:39:03.772711 1017493 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 20:39:03.775809 1017493 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 20:39:03.780955 1017493 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 20:39:03.780966 1017493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 20:39:03.795200 1017493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
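
With the docker driver and the crio runtime, kindnet is the recommended CNI; the manifest is copied into the node and applied with the node-local kubectl and kubeconfig. The apply step as a one-command sketch (all paths as in the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("applying CNI manifest: %v\n%s", err, out)
        }
    }
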
	I1002 20:39:04.317409 1017493 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:39:04.320784 1017493 system_pods.go:59] 8 kube-system pods found
	I1002 20:39:04.320811 1017493 system_pods.go:61] "coredns-66bc5c9577-j9sfw" [22774fdc-2372-4f6c-a311-f87f3c1da6b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:39:04.320821 1017493 system_pods.go:61] "etcd-functional-850296" [2deb9442-175c-4270-a03a-3fe07c68d042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:39:04.320826 1017493 system_pods.go:61] "kindnet-hzdd7" [d54e1d14-95ee-4a95-93eb-9cb09d7fa4c7] Running
	I1002 20:39:04.320834 1017493 system_pods.go:61] "kube-apiserver-functional-850296" [27680d93-7739-4544-ad2c-7916686d9754] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:39:04.320839 1017493 system_pods.go:61] "kube-controller-manager-functional-850296" [c60690e8-4407-4308-8cad-74366a17a41c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:39:04.320844 1017493 system_pods.go:61] "kube-proxy-jf4r2" [597f79f9-4539-419c-bc29-29bd6f37112f] Running
	I1002 20:39:04.320849 1017493 system_pods.go:61] "kube-scheduler-functional-850296" [e8e589ab-dd32-46d2-8c98-8b4759d57a5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:39:04.320853 1017493 system_pods.go:61] "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
	I1002 20:39:04.320859 1017493 system_pods.go:74] duration metric: took 3.43947ms to wait for pod list to return data ...
	I1002 20:39:04.320865 1017493 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:39:04.323907 1017493 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:39:04.323927 1017493 node_conditions.go:123] node cpu capacity is 2
	I1002 20:39:04.323937 1017493 node_conditions.go:105] duration metric: took 3.068271ms to run NodePressure ...
	I1002 20:39:04.323997 1017493 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:39:04.584087 1017493 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 20:39:04.587452 1017493 kubeadm.go:743] kubelet initialised
	I1002 20:39:04.587463 1017493 kubeadm.go:744] duration metric: took 3.362533ms waiting for restarted kubelet to initialise ...
	I1002 20:39:04.587476 1017493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:39:04.596157 1017493 ops.go:34] apiserver oom_adj: -16
	I1002 20:39:04.596168 1017493 kubeadm.go:601] duration metric: took 12.863554s to restartPrimaryControlPlane
	I1002 20:39:04.596175 1017493 kubeadm.go:402] duration metric: took 12.914647323s to StartCluster
	I1002 20:39:04.596189 1017493 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:39:04.596249 1017493 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:39:04.596931 1017493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:39:04.597203 1017493 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:39:04.597448 1017493 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:39:04.597478 1017493 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:39:04.597533 1017493 addons.go:69] Setting storage-provisioner=true in profile "functional-850296"
	I1002 20:39:04.597556 1017493 addons.go:238] Setting addon storage-provisioner=true in "functional-850296"
	W1002 20:39:04.597561 1017493 addons.go:247] addon storage-provisioner should already be in state true
	I1002 20:39:04.597580 1017493 host.go:66] Checking if "functional-850296" exists ...
	I1002 20:39:04.597613 1017493 addons.go:69] Setting default-storageclass=true in profile "functional-850296"
	I1002 20:39:04.597628 1017493 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-850296"
	I1002 20:39:04.597925 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:39:04.597984 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:39:04.601824 1017493 out.go:179] * Verifying Kubernetes components...
	I1002 20:39:04.609249 1017493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:39:04.625293 1017493 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:39:04.627276 1017493 addons.go:238] Setting addon default-storageclass=true in "functional-850296"
	W1002 20:39:04.627286 1017493 addons.go:247] addon default-storageclass should already be in state true
	I1002 20:39:04.627354 1017493 host.go:66] Checking if "functional-850296" exists ...
	I1002 20:39:04.627780 1017493 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:39:04.628347 1017493 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:39:04.628356 1017493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:39:04.628407 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:39:04.662409 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:39:04.672335 1017493 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:39:04.672347 1017493 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:39:04.672412 1017493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:39:04.700986 1017493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:39:04.799296 1017493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:39:04.842722 1017493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:39:04.854352 1017493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:39:05.672180 1017493 node_ready.go:35] waiting up to 6m0s for node "functional-850296" to be "Ready" ...
	I1002 20:39:05.678026 1017493 node_ready.go:49] node "functional-850296" is "Ready"
	I1002 20:39:05.678061 1017493 node_ready.go:38] duration metric: took 5.864024ms for node "functional-850296" to be "Ready" ...
	I1002 20:39:05.678100 1017493 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:39:05.678167 1017493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:39:05.686117 1017493 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 20:39:05.689020 1017493 addons.go:514] duration metric: took 1.091522007s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 20:39:05.691809 1017493 api_server.go:72] duration metric: took 1.094583173s to wait for apiserver process to appear ...
	I1002 20:39:05.691819 1017493 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:39:05.691836 1017493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 20:39:05.701339 1017493 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 20:39:05.702409 1017493 api_server.go:141] control plane version: v1.34.1
	I1002 20:39:05.702421 1017493 api_server.go:131] duration metric: took 10.596048ms to wait for apiserver health ...
	I1002 20:39:05.702428 1017493 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:39:05.706004 1017493 system_pods.go:59] 8 kube-system pods found
	I1002 20:39:05.706021 1017493 system_pods.go:61] "coredns-66bc5c9577-j9sfw" [22774fdc-2372-4f6c-a311-f87f3c1da6b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:39:05.706028 1017493 system_pods.go:61] "etcd-functional-850296" [2deb9442-175c-4270-a03a-3fe07c68d042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:39:05.706065 1017493 system_pods.go:61] "kindnet-hzdd7" [d54e1d14-95ee-4a95-93eb-9cb09d7fa4c7] Running
	I1002 20:39:05.706078 1017493 system_pods.go:61] "kube-apiserver-functional-850296" [27680d93-7739-4544-ad2c-7916686d9754] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:39:05.706088 1017493 system_pods.go:61] "kube-controller-manager-functional-850296" [c60690e8-4407-4308-8cad-74366a17a41c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:39:05.706092 1017493 system_pods.go:61] "kube-proxy-jf4r2" [597f79f9-4539-419c-bc29-29bd6f37112f] Running
	I1002 20:39:05.706097 1017493 system_pods.go:61] "kube-scheduler-functional-850296" [e8e589ab-dd32-46d2-8c98-8b4759d57a5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:39:05.706100 1017493 system_pods.go:61] "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
	I1002 20:39:05.706104 1017493 system_pods.go:74] duration metric: took 3.671735ms to wait for pod list to return data ...
	I1002 20:39:05.706110 1017493 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:39:05.708636 1017493 default_sa.go:45] found service account: "default"
	I1002 20:39:05.708647 1017493 default_sa.go:55] duration metric: took 2.532638ms for default service account to be created ...
	I1002 20:39:05.708653 1017493 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:39:05.711630 1017493 system_pods.go:86] 8 kube-system pods found
	I1002 20:39:05.711648 1017493 system_pods.go:89] "coredns-66bc5c9577-j9sfw" [22774fdc-2372-4f6c-a311-f87f3c1da6b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:39:05.711655 1017493 system_pods.go:89] "etcd-functional-850296" [2deb9442-175c-4270-a03a-3fe07c68d042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:39:05.711660 1017493 system_pods.go:89] "kindnet-hzdd7" [d54e1d14-95ee-4a95-93eb-9cb09d7fa4c7] Running
	I1002 20:39:05.711667 1017493 system_pods.go:89] "kube-apiserver-functional-850296" [27680d93-7739-4544-ad2c-7916686d9754] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:39:05.711673 1017493 system_pods.go:89] "kube-controller-manager-functional-850296" [c60690e8-4407-4308-8cad-74366a17a41c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:39:05.711676 1017493 system_pods.go:89] "kube-proxy-jf4r2" [597f79f9-4539-419c-bc29-29bd6f37112f] Running
	I1002 20:39:05.711682 1017493 system_pods.go:89] "kube-scheduler-functional-850296" [e8e589ab-dd32-46d2-8c98-8b4759d57a5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:39:05.711685 1017493 system_pods.go:89] "storage-provisioner" [4923d052-4ebe-4955-9e43-6337406d02fa] Running
	I1002 20:39:05.711690 1017493 system_pods.go:126] duration metric: took 3.033129ms to wait for k8s-apps to be running ...
	I1002 20:39:05.711696 1017493 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:39:05.711752 1017493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:05.725920 1017493 system_svc.go:56] duration metric: took 14.214114ms WaitForService to wait for kubelet
	I1002 20:39:05.725937 1017493 kubeadm.go:586] duration metric: took 1.12871452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:39:05.725953 1017493 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:39:05.735506 1017493 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 20:39:05.735522 1017493 node_conditions.go:123] node cpu capacity is 2
	I1002 20:39:05.735532 1017493 node_conditions.go:105] duration metric: took 9.575037ms to run NodePressure ...
	I1002 20:39:05.735579 1017493 start.go:242] waiting for startup goroutines ...
	I1002 20:39:05.735587 1017493 start.go:247] waiting for cluster config update ...
	I1002 20:39:05.735597 1017493 start.go:256] writing updated cluster config ...
	I1002 20:39:05.735875 1017493 ssh_runner.go:195] Run: rm -f paused
	I1002 20:39:05.740011 1017493 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:39:05.805703 1017493 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j9sfw" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:39:07.811503 1017493 pod_ready.go:104] pod "coredns-66bc5c9577-j9sfw" is not "Ready", error: <nil>
	W1002 20:39:10.311824 1017493 pod_ready.go:104] pod "coredns-66bc5c9577-j9sfw" is not "Ready", error: <nil>
	W1002 20:39:12.313123 1017493 pod_ready.go:104] pod "coredns-66bc5c9577-j9sfw" is not "Ready", error: <nil>
	I1002 20:39:12.819279 1017493 pod_ready.go:94] pod "coredns-66bc5c9577-j9sfw" is "Ready"
	I1002 20:39:12.819294 1017493 pod_ready.go:86] duration metric: took 7.013577062s for pod "coredns-66bc5c9577-j9sfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:12.826608 1017493 pod_ready.go:83] waiting for pod "etcd-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:12.839600 1017493 pod_ready.go:94] pod "etcd-functional-850296" is "Ready"
	I1002 20:39:12.839623 1017493 pod_ready.go:86] duration metric: took 12.999729ms for pod "etcd-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:12.852169 1017493 pod_ready.go:83] waiting for pod "kube-apiserver-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:13.857442 1017493 pod_ready.go:94] pod "kube-apiserver-functional-850296" is "Ready"
	I1002 20:39:13.857456 1017493 pod_ready.go:86] duration metric: took 1.005274006s for pod "kube-apiserver-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:13.859670 1017493 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:13.864111 1017493 pod_ready.go:94] pod "kube-controller-manager-functional-850296" is "Ready"
	I1002 20:39:13.864124 1017493 pod_ready.go:86] duration metric: took 4.443014ms for pod "kube-controller-manager-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:14.009109 1017493 pod_ready.go:83] waiting for pod "kube-proxy-jf4r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:14.409456 1017493 pod_ready.go:94] pod "kube-proxy-jf4r2" is "Ready"
	I1002 20:39:14.409470 1017493 pod_ready.go:86] duration metric: took 400.346955ms for pod "kube-proxy-jf4r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:14.609727 1017493 pod_ready.go:83] waiting for pod "kube-scheduler-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:16.615190 1017493 pod_ready.go:94] pod "kube-scheduler-functional-850296" is "Ready"
	I1002 20:39:16.615203 1017493 pod_ready.go:86] duration metric: took 2.005463904s for pod "kube-scheduler-functional-850296" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:39:16.615217 1017493 pod_ready.go:40] duration metric: took 10.875180818s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
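
The final wait polls every kube-system pod carrying one of the listed control-plane labels until each reports the Ready condition or disappears. A hedged client-go sketch of that loop, with label selectors taken from the log and the polling interval and kubeconfig discovery assumed:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // ready reports whether the pod's Ready condition is True.
    func ready(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
        }
        deadline := time.Now().Add(4 * time.Minute)
        for _, sel := range selectors {
            for time.Now().Before(deadline) {
                pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                    metav1.ListOptions{LabelSelector: sel})
                if err == nil {
                    allReady := true
                    for i := range pods.Items {
                        if !ready(&pods.Items[i]) {
                            allReady = false
                            break
                        }
                    }
                    if allReady {
                        break
                    }
                }
                time.Sleep(2 * time.Second)
            }
        }
        fmt.Println("kube-system control-plane pods Ready (or deadline reached)")
    }
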
	I1002 20:39:16.672078 1017493 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 20:39:16.677291 1017493 out.go:179] * Done! kubectl is now configured to use "functional-850296" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 20:39:56 functional-850296 crio[3524]: time="2025-10-02T20:39:56.733218112Z" level=info msg="Stopping pod sandbox: c8f1494ceb2777e417b9f6999ccb1a34883e3d1c18c7e5e849ea64043b10a648" id=881d23d6-2ded-421b-b4ff-f8e2ad391cae name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:39:56 functional-850296 crio[3524]: time="2025-10-02T20:39:56.733268474Z" level=info msg="Stopped pod sandbox (already stopped): c8f1494ceb2777e417b9f6999ccb1a34883e3d1c18c7e5e849ea64043b10a648" id=881d23d6-2ded-421b-b4ff-f8e2ad391cae name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 20:39:56 functional-850296 crio[3524]: time="2025-10-02T20:39:56.733629819Z" level=info msg="Removing pod sandbox: c8f1494ceb2777e417b9f6999ccb1a34883e3d1c18c7e5e849ea64043b10a648" id=64f9dcc1-4a52-443e-ab4c-49b596df8210 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 20:39:56 functional-850296 crio[3524]: time="2025-10-02T20:39:56.737557022Z" level=info msg="Removed pod sandbox: c8f1494ceb2777e417b9f6999ccb1a34883e3d1c18c7e5e849ea64043b10a648" id=64f9dcc1-4a52-443e-ab4c-49b596df8210 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 20:39:56 functional-850296 crio[3524]: time="2025-10-02T20:39:56.904831265Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:40:27 functional-850296 crio[3524]: time="2025-10-02T20:40:27.188843773Z" level=info msg="Pulling image: docker.io/nginx:latest" id=c917564c-229c-43e2-b88f-c74a22bdeac4 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:40:27 functional-850296 crio[3524]: time="2025-10-02T20:40:27.191503939Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:40:28 functional-850296 crio[3524]: time="2025-10-02T20:40:28.071939233Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=911eda52-d2ef-4af1-abb9-bc357e5fc93f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:40:28 functional-850296 crio[3524]: time="2025-10-02T20:40:28.072098474Z" level=info msg="Image docker.io/nginx:alpine not found" id=911eda52-d2ef-4af1-abb9-bc357e5fc93f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:40:28 functional-850296 crio[3524]: time="2025-10-02T20:40:28.072139703Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=911eda52-d2ef-4af1-abb9-bc357e5fc93f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:40:42 functional-850296 crio[3524]: time="2025-10-02T20:40:42.746705055Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=24f640e9-d7ab-4a7b-ade7-8504de6f08bb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:40:42 functional-850296 crio[3524]: time="2025-10-02T20:40:42.746861908Z" level=info msg="Image docker.io/nginx:alpine not found" id=24f640e9-d7ab-4a7b-ade7-8504de6f08bb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:40:42 functional-850296 crio[3524]: time="2025-10-02T20:40:42.746936598Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=24f640e9-d7ab-4a7b-ade7-8504de6f08bb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:41:03 functional-850296 crio[3524]: time="2025-10-02T20:41:03.652070439Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:41:33 functional-850296 crio[3524]: time="2025-10-02T20:41:33.917061824Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=09cf870f-b74a-4da0-b804-b547b0ed8ba5 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:41:33 functional-850296 crio[3524]: time="2025-10-02T20:41:33.919338279Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:42:04 functional-850296 crio[3524]: time="2025-10-02T20:42:04.19572234Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 02 20:42:34 functional-850296 crio[3524]: time="2025-10-02T20:42:34.463533822Z" level=info msg="Pulling image: docker.io/nginx:latest" id=d77d0de9-a2e4-4ad6-9e6d-6fc6b8272273 name=/runtime.v1.ImageService/PullImage
	Oct 02 20:42:34 functional-850296 crio[3524]: time="2025-10-02T20:42:34.466420149Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 02 20:42:47 functional-850296 crio[3524]: time="2025-10-02T20:42:47.745324321Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=88fa8e22-a4d3-4dd4-94d6-711dd6c1d612 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:42:47 functional-850296 crio[3524]: time="2025-10-02T20:42:47.745460523Z" level=info msg="Image docker.io/nginx:alpine not found" id=88fa8e22-a4d3-4dd4-94d6-711dd6c1d612 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:42:47 functional-850296 crio[3524]: time="2025-10-02T20:42:47.745498659Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=88fa8e22-a4d3-4dd4-94d6-711dd6c1d612 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:02 functional-850296 crio[3524]: time="2025-10-02T20:43:02.746789538Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=89786e1c-2315-4d9c-8a82-c2e407fb8833 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:02 functional-850296 crio[3524]: time="2025-10-02T20:43:02.747148962Z" level=info msg="Image docker.io/nginx:alpine not found" id=89786e1c-2315-4d9c-8a82-c2e407fb8833 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:02 functional-850296 crio[3524]: time="2025-10-02T20:43:02.747212198Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=89786e1c-2315-4d9c-8a82-c2e407fb8833 name=/runtime.v1.ImageService/ImageStatus
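
The repeating pattern in the CRI-O log above is the kubelet's image-pull backoff for docker.io/nginx:alpine and docker.io/nginx:latest: periodic ImageStatus checks that find nothing, interleaved with slow pull attempts against docker.io. The same status check can be reproduced on the node (e.g. via `minikube ssh`) with a sketch like this:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `crictl inspecti` exits non-zero while the image is absent, matching
        // the "Image docker.io/nginx:alpine not found" entries above.
        out, err := exec.Command("sudo", "crictl", "inspecti", "docker.io/nginx:alpine").CombinedOutput()
        if err != nil {
            fmt.Printf("image not present yet: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }
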
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8dfb5e0595e07       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                2                   881e8e9f1e876       kube-proxy-jf4r2                            kube-system
	c6de90c680ce5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               2                   9863cec15bb5a       kindnet-hzdd7                               kube-system
	991a81471c245       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   2                   137e0483bc7eb       coredns-66bc5c9577-j9sfw                    kube-system
	28b49cfffc635       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   50523e12462aa       storage-provisioner                         kube-system
	275e899f52009       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   4 minutes ago       Running             kube-apiserver            0                   88c931ebcfb5f       kube-apiserver-functional-850296            kube-system
	27f03308c1942       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Running             kube-controller-manager   2                   c8908d405c069       kube-controller-manager-functional-850296   kube-system
	1d131e04547ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   4 minutes ago       Running             kube-scheduler            2                   57e9f61b27dc9       kube-scheduler-functional-850296            kube-system
	7d406b360d906       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   4 minutes ago       Running             etcd                      2                   827db98da488f       etcd-functional-850296                      kube-system
	6d1248452ad29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   5 minutes ago       Exited              etcd                      1                   827db98da488f       etcd-functional-850296                      kube-system
	4c2d0d935a5a3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Exited              kube-controller-manager   1                   c8908d405c069       kube-controller-manager-functional-850296   kube-system
	57a5c63b7515c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   5 minutes ago       Exited              kube-scheduler            1                   57e9f61b27dc9       kube-scheduler-functional-850296            kube-system
	cdb96f1a50245       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   50523e12462aa       storage-provisioner                         kube-system
	7878706c55ce3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Exited              kindnet-cni               1                   9863cec15bb5a       kindnet-hzdd7                               kube-system
	c9663fe1dfee7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Exited              coredns                   1                   137e0483bc7eb       coredns-66bc5c9577-j9sfw                    kube-system
	a795ea3c6cfd9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Exited              kube-proxy                1                   881e8e9f1e876       kube-proxy-jf4r2                            kube-system
	
	
	==> coredns [991a81471c2453c500385f0a6c23bee980c37e0e4eee80f00f13b4914c9ba5de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52081 - 62515 "HINFO IN 8732729395583003918.4849294333637737484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003943884s
	
	
	==> coredns [c9663fe1dfee7d57f6b8c7bd72b81a70c5afcc4aa55c9450e671cea65a3c06e1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35798 - 45296 "HINFO IN 1292503344635988855.3549566320544195153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013336221s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-850296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-850296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=functional-850296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_37_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:37:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-850296
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:42:37 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:42:37 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:42:37 +0000   Thu, 02 Oct 2025 20:37:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:42:37 +0000   Thu, 02 Oct 2025 20:38:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-850296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 76773bab76c446648979f56596eaecff
	  System UUID:                d0defe04-ab05-4998-9efd-4465d0254c4c
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-j9sfw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m13s
	  kube-system                 etcd-functional-850296                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m18s
	  kube-system                 kindnet-hzdd7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m13s
	  kube-system                 kube-apiserver-functional-850296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-functional-850296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-jf4r2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-functional-850296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m12s                  kube-proxy       
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   Starting                 5m13s                  kube-proxy       
	  Warning  CgroupV1                 6m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m19s                  kubelet          Node functional-850296 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m19s                  kubelet          Node functional-850296 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m19s                  kubelet          Node functional-850296 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m19s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m14s                  node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	  Normal   NodeReady                5m32s                  kubelet          Node functional-850296 status is now: NodeReady
	  Normal   RegisteredNode           5m11s                  node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	  Normal   NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node functional-850296 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 4m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node functional-850296 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m38s (x8 over 4m38s)  kubelet          Node functional-850296 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m29s                  node-controller  Node functional-850296 event: Registered Node functional-850296 in Controller
	
	
	==> dmesg <==
	[Oct 2 19:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:17] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 20:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 20:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 20:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d1248452ad29e3224296cf66f6505e875f1159881235dedcbe9793d3c9f615e] <==
	{"level":"warn","ts":"2025-10-02T20:38:19.523649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.531322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.555740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.585706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.601943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.623228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:38:19.672824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54592","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:38:43.609768Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:38:43.609814Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-850296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T20:38:43.609901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:38:43.763689Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:38:43.763794Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.763817Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T20:38:43.763851Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:38:43.763891Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:38:43.763949Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:38:43.763985Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:38:43.763993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:38:43.764095Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:38:43.764141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:38:43.764174Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.767777Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T20:38:43.767868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:38:43.767899Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T20:38:43.767905Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-850296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7d406b360d906cbd403e5610b39152427208b3006f82823e3a1bc43394a91391] <==
	{"level":"warn","ts":"2025-10-02T20:39:00.775126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.791801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.815324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.838989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.862714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.885092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.903937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.915243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.932238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.966616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:00.980154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.000818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.014976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.033024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.058714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.108726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.121077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.143692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.156964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.173133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.198478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.223054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.242213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.258901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:39:01.322142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51620","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:43:34 up  5:25,  0 user,  load average: 0.32, 0.67, 1.36
	Linux functional-850296 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7878706c55ce34583311ef4a87456c1d6d6e903f7330e166383956d89b187d93] <==
	I1002 20:38:15.626090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 20:38:15.642339       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 20:38:15.642579       1 main.go:148] setting mtu 1500 for CNI 
	I1002 20:38:15.642605       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 20:38:15.642619       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T20:38:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 20:38:15.823304       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 20:38:15.823379       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 20:38:15.823411       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 20:38:15.826976       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 20:38:20.728679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 20:38:20.728784       1 metrics.go:72] Registering metrics
	I1002 20:38:20.728874       1 controller.go:711] "Syncing nftables rules"
	I1002 20:38:25.826113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:38:25.826235       1 main.go:301] handling current node
	I1002 20:38:35.823570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:38:35.823603       1 main.go:301] handling current node
	
	
	==> kindnet [c6de90c680ce5402050e16cb4f6e81ee97109c3bb463f7e3ffae85261344e670] <==
	I1002 20:41:33.527644       1 main.go:301] handling current node
	I1002 20:41:43.522203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:41:43.522316       1 main.go:301] handling current node
	I1002 20:41:53.525717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:41:53.525752       1 main.go:301] handling current node
	I1002 20:42:03.519346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:42:03.519390       1 main.go:301] handling current node
	I1002 20:42:13.519189       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:42:13.519231       1 main.go:301] handling current node
	I1002 20:42:23.526559       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:42:23.526593       1 main.go:301] handling current node
	I1002 20:42:33.523805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:42:33.523839       1 main.go:301] handling current node
	I1002 20:42:43.522721       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:42:43.522761       1 main.go:301] handling current node
	I1002 20:42:53.522134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:42:53.522172       1 main.go:301] handling current node
	I1002 20:43:03.519037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:43:03.519069       1 main.go:301] handling current node
	I1002 20:43:13.522296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:43:13.522334       1 main.go:301] handling current node
	I1002 20:43:23.525751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:43:23.525787       1 main.go:301] handling current node
	I1002 20:43:33.526188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 20:43:33.526223       1 main.go:301] handling current node
	
	
	==> kube-apiserver [275e899f5200905471afcb9d9b210a0463a726a93b579fb14dc43c0cfc487a07] <==
	I1002 20:39:02.070123       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 20:39:02.076902       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 20:39:02.077703       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 20:39:02.082124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 20:39:02.082274       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 20:39:02.082870       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 20:39:02.083346       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 20:39:02.083366       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 20:39:02.083449       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 20:39:02.100397       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 20:39:02.102318       1 policy_source.go:240] refreshing policies
	E1002 20:39:02.106557       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:39:02.117200       1 cache.go:39] Caches are synced for autoregister controller
	I1002 20:39:02.133480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:39:02.764397       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:39:02.886483       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:39:04.310121       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:39:04.441156       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:39:04.513030       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:39:04.523147       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:39:05.581888       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:39:05.733846       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:39:05.783096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 20:39:20.000215       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.221.12"}
	I1002 20:39:26.213086       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.136.83"}
	
	
	==> kube-controller-manager [27f03308c19421d82964512a8f4396955b6f0220780d0d43a730552eb475fd76] <==
	I1002 20:39:05.452225       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 20:39:05.452254       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 20:39:05.452281       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 20:39:05.452416       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:39:05.452504       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 20:39:05.452600       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 20:39:05.461148       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:39:05.459277       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 20:39:05.459325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:39:05.460306       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:39:05.461653       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:39:05.461709       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-850296"
	I1002 20:39:05.461789       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:39:05.459344       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:39:05.460325       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:39:05.460562       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:39:05.475118       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:39:05.476407       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:39:05.478941       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:39:05.479016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:39:05.485126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 20:39:05.495376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:39:05.507627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:39:05.507651       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:39:05.507659       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [4c2d0d935a5a32b4e66e39fadf67bd101aafc061bb3de8e074c2c81f0fc0f3f5] <==
	I1002 20:38:23.949252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 20:38:23.950781       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 20:38:23.955937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 20:38:23.963251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:38:23.963276       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:38:23.963284       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:38:23.972900       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:38:23.975307       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:38:23.976452       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:38:23.976559       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 20:38:23.976601       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 20:38:23.976579       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:38:23.976716       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 20:38:23.976803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:38:23.976567       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:38:23.976591       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 20:38:23.977108       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 20:38:23.977593       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-850296"
	I1002 20:38:23.977647       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:38:23.985638       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 20:38:23.985687       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 20:38:23.985706       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 20:38:23.985711       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 20:38:23.985717       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 20:38:23.990173       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8dfb5e0595e0720813e66577e5555d958f1259cee1c6366fa3f443e2b14c0ae1] <==
	I1002 20:39:03.240087       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:39:03.376638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:39:03.478249       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:39:03.478356       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:39:03.478468       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:39:03.497236       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:39:03.497288       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:39:03.501346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:39:03.501743       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:39:03.501818       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:39:03.505948       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:39:03.506161       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:39:03.506197       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:39:03.506932       1 config.go:309] "Starting node config controller"
	I1002 20:39:03.506952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:39:03.506959       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:39:03.507525       1 config.go:200] "Starting service config controller"
	I1002 20:39:03.507544       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:39:03.506028       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:39:03.609852       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:39:03.609911       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:39:03.611217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a795ea3c6cfd97f06b7f92521162a0989d1af0abdd38b203d6a33e500b3e7d09] <==
	I1002 20:38:18.258892       1 server_linux.go:53] "Using iptables proxy"
	I1002 20:38:18.975793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:38:20.801120       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:38:20.808394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 20:38:20.821259       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:38:21.048413       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 20:38:21.048537       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:38:21.070806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:38:21.071210       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:38:21.071227       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:38:21.072703       1 config.go:200] "Starting service config controller"
	I1002 20:38:21.072770       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:38:21.072820       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:38:21.079579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:38:21.079688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:38:21.076958       1 config.go:309] "Starting node config controller"
	I1002 20:38:21.079781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:38:21.079810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:38:21.075789       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:38:21.079890       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:38:21.079926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:38:21.173900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d131e04547edd912ef6b1b2a69a2e3c509e8bd119fdbc1e1e5e804ca19c5da5] <==
	I1002 20:39:00.118670       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:39:02.038429       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:39:02.038553       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:39:02.038590       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:39:02.038640       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:39:02.071647       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:39:02.074053       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:39:02.076511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:39:02.076614       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:39:02.077076       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:39:02.077148       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:39:02.178156       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [57a5c63b7515c8ddd3572dbb6f31d3c324ab7dff05913ef9a6085cecc8fbd5ea] <==
	I1002 20:38:18.329695       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:38:20.519064       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:38:20.519089       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:38:20.519099       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:38:20.519118       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:38:20.630871       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:38:20.630903       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:38:20.641277       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:38:20.654143       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:20.658183       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:20.658269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:38:20.760895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:43.615409       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 20:38:43.615431       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 20:38:43.615453       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 20:38:43.615508       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:38:43.615682       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 20:38:43.615697       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 20:39:23 functional-850296 kubelet[3848]: I1002 20:39:23.585146    3848 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8pqm\" (UniqueName: \"kubernetes.io/projected/55d24b82-176e-41c0-a424-143c25cbb7b2-kube-api-access-t8pqm\") pod \"55d24b82-176e-41c0-a424-143c25cbb7b2\" (UID: \"55d24b82-176e-41c0-a424-143c25cbb7b2\") "
	Oct 02 20:39:23 functional-850296 kubelet[3848]: I1002 20:39:23.589669    3848 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d24b82-176e-41c0-a424-143c25cbb7b2-kube-api-access-t8pqm" (OuterVolumeSpecName: "kube-api-access-t8pqm") pod "55d24b82-176e-41c0-a424-143c25cbb7b2" (UID: "55d24b82-176e-41c0-a424-143c25cbb7b2"). InnerVolumeSpecName "kube-api-access-t8pqm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 20:39:23 functional-850296 kubelet[3848]: I1002 20:39:23.686537    3848 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t8pqm\" (UniqueName: \"kubernetes.io/projected/55d24b82-176e-41c0-a424-143c25cbb7b2-kube-api-access-t8pqm\") on node \"functional-850296\" DevicePath \"\""
	Oct 02 20:39:24 functional-850296 kubelet[3848]: I1002 20:39:24.746839    3848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d24b82-176e-41c0-a424-143c25cbb7b2" path="/var/lib/kubelet/pods/55d24b82-176e-41c0-a424-143c25cbb7b2/volumes"
	Oct 02 20:39:26 functional-850296 kubelet[3848]: I1002 20:39:26.210232    3848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rxbd\" (UniqueName: \"kubernetes.io/projected/ad429109-a08a-40c0-b1c2-ee8a50977fb1-kube-api-access-8rxbd\") pod \"nginx-svc\" (UID: \"ad429109-a08a-40c0-b1c2-ee8a50977fb1\") " pod="default/nginx-svc"
	Oct 02 20:39:26 functional-850296 kubelet[3848]: W1002 20:39:26.505574    3848 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b3320f49b45095e0cd8ffc9d81927739aab1fb0aa10d6d93cb1d37a17c522ddc/crio-ff2a55ab2022b3244e05c57a59f6679fc3b62e76775e388faacd4bbda7a439e6 WatchSource:0}: Error finding container ff2a55ab2022b3244e05c57a59f6679fc3b62e76775e388faacd4bbda7a439e6: Status 404 returned error can't find the container with id ff2a55ab2022b3244e05c57a59f6679fc3b62e76775e388faacd4bbda7a439e6
	Oct 02 20:39:32 functional-850296 kubelet[3848]: I1002 20:39:32.856164    3848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br99\" (UniqueName: \"kubernetes.io/projected/3444e5e8-03bd-4963-9c74-f52b3adfa223-kube-api-access-4br99\") pod \"sp-pod\" (UID: \"3444e5e8-03bd-4963-9c74-f52b3adfa223\") " pod="default/sp-pod"
	Oct 02 20:39:32 functional-850296 kubelet[3848]: I1002 20:39:32.856224    3848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5a161874-11df-461b-a845-3b6dbdaccb70\" (UniqueName: \"kubernetes.io/host-path/3444e5e8-03bd-4963-9c74-f52b3adfa223-pvc-5a161874-11df-461b-a845-3b6dbdaccb70\") pod \"sp-pod\" (UID: \"3444e5e8-03bd-4963-9c74-f52b3adfa223\") " pod="default/sp-pod"
	Oct 02 20:39:56 functional-850296 kubelet[3848]: E1002 20:39:56.785276    3848 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c2b9fc722dde414b54e9c4ad0ce50a384da528a7cb758526d9b888b7c128f630/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c2b9fc722dde414b54e9c4ad0ce50a384da528a7cb758526d9b888b7c128f630/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 20:39:56 functional-850296 kubelet[3848]: E1002 20:39:56.810285    3848 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1d585b4bcd3eab7f21d40ae12dbdd4ba35edb1b7ee9141dcc7c1cad8ad33d5d1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1d585b4bcd3eab7f21d40ae12dbdd4ba35edb1b7ee9141dcc7c1cad8ad33d5d1/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-functional-850296_7b862c6637830d9438fb7385f4944a85/kube-apiserver/1.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-functional-850296_7b862c6637830d9438fb7385f4944a85/kube-apiserver/1.log: no such file or directory
	Oct 02 20:40:27 functional-850296 kubelet[3848]: E1002 20:40:27.188452    3848 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:40:27 functional-850296 kubelet[3848]: E1002 20:40:27.188522    3848 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:40:27 functional-850296 kubelet[3848]: E1002 20:40:27.189878    3848 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(ad429109-a08a-40c0-b1c2-ee8a50977fb1): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:40:27 functional-850296 kubelet[3848]: E1002 20:40:27.189943    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:40:28 functional-850296 kubelet[3848]: E1002 20:40:28.072567    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:41:33 functional-850296 kubelet[3848]: E1002 20:41:33.916667    3848 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:41:33 functional-850296 kubelet[3848]: E1002 20:41:33.916729    3848 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 20:41:33 functional-850296 kubelet[3848]: E1002 20:41:33.917969    3848 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(3444e5e8-03bd-4963-9c74-f52b3adfa223): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:41:33 functional-850296 kubelet[3848]: E1002 20:41:33.918020    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:41:34 functional-850296 kubelet[3848]: E1002 20:41:34.232217    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="3444e5e8-03bd-4963-9c74-f52b3adfa223"
	Oct 02 20:42:34 functional-850296 kubelet[3848]: E1002 20:42:34.462623    3848 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:42:34 functional-850296 kubelet[3848]: E1002 20:42:34.463148    3848 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 20:42:34 functional-850296 kubelet[3848]: E1002 20:42:34.463449    3848 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(ad429109-a08a-40c0-b1c2-ee8a50977fb1): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 20:42:34 functional-850296 kubelet[3848]: E1002 20:42:34.464543    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	Oct 02 20:42:47 functional-850296 kubelet[3848]: E1002 20:42:47.745918    3848 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="ad429109-a08a-40c0-b1c2-ee8a50977fb1"
	
	
	==> storage-provisioner [28b49cfffc6351da29c7557ee872755ca084db930b14770b1ba25cf3d451dfe7] <==
	W1002 20:43:09.652808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:11.655507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:11.659931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:13.663431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:13.670172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:15.673834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:15.678525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:17.682016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:17.686750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:19.690099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:19.695267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:21.698715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:21.705553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:23.708568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:23.712830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:25.716523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:25.722479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:27.725081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:27.732128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:29.735409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:29.740035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:31.742469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:31.748898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:33.752121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:43:33.756868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cdb96f1a50245e85028f34a2bd241e7e6b08bf2bce15ce95f6bfafc37d115ecf] <==
	I1002 20:38:15.999486       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 20:38:20.874431       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 20:38:20.874559       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 20:38:20.899722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:24.365016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:28.625466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:32.223997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:35.277427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.300024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.305158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:38:38.305311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 20:38:38.305677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99de5c4e-838e-4677-b696-969817484c14", APIVersion:"v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6 became leader
	I1002 20:38:38.305708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6!
	W1002 20:38:38.307641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:38.316877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 20:38:38.406699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-850296_5e7e7349-27d6-469b-9048-22c092eda1c6!
	W1002 20:38:40.319248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:40.326803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:42.337454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:38:42.346693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
helpers_test.go:269: (dbg) Run:  kubectl --context functional-850296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-850296 describe pod nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-850296 describe pod nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:39:26 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rxbd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8rxbd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  4m9s                default-scheduler  Successfully assigned default/nginx-svc to functional-850296
	  Warning  Failed     3m8s                kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x2 over 3m8s)  kubelet            Error: ErrImagePull
	  Warning  Failed     61s                 kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    48s (x2 over 3m7s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     48s (x2 over 3m7s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x3 over 4m9s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-850296/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 20:39:32 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4br99 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4br99:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-850296
	  Warning  Failed     2m2s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m2s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    2m1s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m1s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    110s (x2 over 4m2s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.44s)
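
Both pods above fail for the same root cause: every pull of docker.io/nginx is unauthenticated and hits Docker Hub's anonymous rate limit (toomanyrequests). A minimal mitigation sketch, assuming a Docker Hub account is available (DOCKERHUB_USER and DOCKERHUB_TOKEN are placeholders, not values from this run): register the credentials as a pull secret and attach it to the default service account so kubelet pulls are authenticated.

	# hypothetical credentials; substitute a real Docker Hub user and access token
	kubectl --context functional-850296 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" \
	  --docker-password="$DOCKERHUB_TOKEN"
	# make pods using the default service account pull with the secret
	kubectl --context functional-850296 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'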

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-850296 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [ad429109-a08a-40c0-b1c2-ee8a50977fb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-10-02 20:43:26.552539521 +0000 UTC m=+1522.652476010
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-850296 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-850296 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-850296/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:39:26 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8rxbd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8rxbd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-850296
Warning  Failed     2m59s                kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     52s (x2 over 2m59s)  kubelet            Error: ErrImagePull
Warning  Failed     52s                  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    39s (x2 over 2m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     39s (x2 over 2m58s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    24s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-850296 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-850296 logs nginx-svc -n default: exit status 1 (99.857075ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-850296 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.00s)
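
This is the same toomanyrequests failure as in PersistentVolumeClaim above: nginx-svc never pulls its image, so the 4m0s wait simply times out. A sketch of an alternative that avoids in-cluster registry pulls entirely, assuming the host itself is not rate-limited: pull once on the host and side-load the image into the profile.

	docker pull nginx:alpine
	# copy the image from the host's Docker daemon into the cluster's image store
	out/minikube-linux-arm64 -p functional-850296 image load nginx:alpine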

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (93.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1002 20:43:26.742007  993954 retry.go:31] will retry after 3.242514437s: Temporary Error: Get "http:": http: no Host in request URL
I1002 20:43:29.985287  993954 retry.go:31] will retry after 3.800444844s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-850296 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.102.136.83   10.102.136.83   80:32553/TCP   5m33s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (93.25s)
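
Note that the tunnel itself worked: nginx-svc has the external IP 10.102.136.83 above. The empty "http:" URL and missing body are downstream of the pod never becoming Ready. A sketch of the check that would pass once the image pull succeeds (the IP is read from the service status rather than hard-coded):

	IP=$(kubectl --context functional-850296 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	# serves the default page only once the nginx pod is Running
	curl -s "http://$IP/" | grep 'Welcome to nginx!'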

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (601.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-850296 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-850296 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bqdjf" [1944770c-61a2-4381-867a-98a7fe0db025] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 20:46:34.539840  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:51:34.539631  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:52:57.603017  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-850296 -n functional-850296
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 20:55:01.204468381 +0000 UTC m=+2217.304404870
functional_test.go:1460: (dbg) Run:  kubectl --context functional-850296 describe po hello-node-75c85bcc94-bqdjf -n default
functional_test.go:1460: (dbg) kubectl --context functional-850296 describe po hello-node-75c85bcc94-bqdjf -n default:
Name:             hello-node-75c85bcc94-bqdjf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-850296/192.168.49.2
Start Time:       Thu, 02 Oct 2025 20:45:00 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8cxcc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8cxcc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bqdjf to functional-850296
Normal   Pulling    3m46s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     3m43s (x5 over 8m49s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     3m43s (x5 over 8m49s)   kubelet            Error: ErrImagePull
Warning  Failed     2m21s (x16 over 8m49s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    71s (x21 over 8m49s)    kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-850296 logs hello-node-75c85bcc94-bqdjf -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-850296 logs hello-node-75c85bcc94-bqdjf -n default: exit status 1 (94.575858ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bqdjf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-850296 logs hello-node-75c85bcc94-bqdjf -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.30s)
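
Unlike the Docker Hub rate-limit failures above, this pull fails because CRI-O's short-name resolution is in enforcing mode and the unqualified name kicbase/echo-server:latest resolves ambiguously across the configured search registries. A hedged workaround sketch, assuming the image is published on docker.io (as other tests in this run suggest): deploy with a fully qualified reference so no short-name resolution is needed.

	kubectl --context functional-850296 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest

The enforcing behaviour comes from the short-name-mode setting in the node's /etc/containers/registries.conf, which can be inspected with: out/minikube-linux-arm64 -p functional-850296 ssh -- grep short-name /etc/containers/registries.conf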

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 service --namespace=default --https --url hello-node: exit status 115 (391.55632ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31543
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-850296 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 service hello-node --url --format={{.IP}}: exit status 115 (388.369432ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-850296 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 service hello-node --url: exit status 115 (387.125ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31543
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-850296 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31543
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
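
All three service subcommand failures (HTTPS, Format, URL) exit with SVC_UNREACHABLE for the same reason: the NodePort exists, but hello-node has no running backend pod behind it. A sketch of the readiness pre-check the CLI is effectively performing, phrased against the EndpointSlice API that the deprecation warnings earlier in this report recommend:

	# an empty ENDPOINTS column here means the service has no ready backends
	kubectl --context functional-850296 get endpointslices \
	  -l kubernetes.io/service-name=hello-node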

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image load --daemon kicbase/echo-server:functional-850296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-850296" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)
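
minikube image ls reads the image store inside the CRI-O node, so a load that silently failed, or that stored the image under an unexpected prefix, shows up as a missing tag. A diagnostic sketch to see what the runtime actually holds (crictl ships inside the node):

	out/minikube-linux-arm64 -p functional-850296 ssh -- sudo crictl images | grep echo-server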

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image load --daemon kicbase/echo-server:functional-850296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-850296" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-850296
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image load --daemon kicbase/echo-server:functional-850296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-850296" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image save kicbase/echo-server:functional-850296 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1002 20:55:13.308339 1026036 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:55:13.309162 1026036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:55:13.309178 1026036 out.go:374] Setting ErrFile to fd 2...
	I1002 20:55:13.309184 1026036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:55:13.309530 1026036 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:55:13.310467 1026036 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:55:13.310631 1026036 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:55:13.311140 1026036 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
	I1002 20:55:13.329225 1026036 ssh_runner.go:195] Run: systemctl --version
	I1002 20:55:13.329285 1026036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
	I1002 20:55:13.348575 1026036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
	I1002 20:55:13.444678 1026036 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1002 20:55:13.444738 1026036 cache_images.go:254] Failed to load cached images for "functional-850296": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1002 20:55:13.444758 1026036 cache_images.go:266] failed pushing to: functional-850296

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
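
The stderr trace pinpoints the cause: the tarball was never written by the preceding ImageSaveToFile step, so this test fails on stat (no such file or directory) before any loading happens. The intended round trip, sketched in isolation with /tmp/echo-server.tar as a placeholder path:

	out/minikube-linux-arm64 -p functional-850296 image save \
	  kicbase/echo-server:functional-850296 /tmp/echo-server.tar
	# only attempt the load if the save actually produced a file
	test -f /tmp/echo-server.tar && \
	  out/minikube-linux-arm64 -p functional-850296 image load /tmp/echo-server.tar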

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-850296
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image save --daemon kicbase/echo-server:functional-850296 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-850296
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-850296: exit status 1 (16.863135ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-850296

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-850296

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
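
The test expects image save --daemon to deposit localhost/kicbase/echo-server:functional-850296 into the host's Docker daemon; since the save pipeline produced nothing, the inspect fails. A sketch to list what, if anything, the daemon received:

	docker image ls --format '{{.Repository}}:{{.Tag}}' | grep echo-server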

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.35s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-003875 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-003875 --output=json --user=testUser: exit status 80 (2.354265616s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d7bfc5e0-2c6a-4587-9a88-0fd0202c0007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-003875 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"646766ec-9fa9-4925-b3c8-ddf007017776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-02T21:12:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"7a775746-6cfc-493b-b76b-97a82f748c10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-003875 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.35s)
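
The GUEST_PAUSE error embedded in the JSON shows runc list failing because /run/runc does not exist inside the node, i.e. the runtime's state directory is missing rather than any container refusing to pause. A diagnostic sketch run from the host (--root is runc's global state-directory flag; the unpause failure below has the identical cause):

	out/minikube-linux-arm64 -p json-output-003875 ssh -- sudo ls /run/runc
	out/minikube-linux-arm64 -p json-output-003875 ssh -- sudo runc --root /run/runc list -f json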

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.99s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-003875 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-003875 --output=json --user=testUser: exit status 80 (1.985984097s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9875140f-15aa-4ff7-9fd5-2a181499916a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-003875 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f23880c5-8130-4047-9ae2-41dba52be930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-02T21:12:16Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"bdcd28db-4e14-4f49-b3cc-485204e199d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-003875 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.99s)

                                                
                                    
x
+
TestPreload (443.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-731213 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1002 21:24:25.746814  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-731213 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (58.783405748s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-731213 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-731213 image pull gcr.io/k8s-minikube/busybox: (2.099905869s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-731213
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-731213: (5.827376968s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-731213 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1002 21:26:17.605782  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:26:34.542471  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:29:25.746947  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:34.546088  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p test-preload-731213 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m12.231632603s)

                                                
                                                
-- stdout --
	* [test-preload-731213] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-731213" primary control-plane node in "test-preload-731213" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:25:27.323784 1122246 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:25:27.324355 1122246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:25:27.324682 1122246 out.go:374] Setting ErrFile to fd 2...
	I1002 21:25:27.324713 1122246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:25:27.325007 1122246 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:25:27.325427 1122246 out.go:368] Setting JSON to false
	I1002 21:25:27.326338 1122246 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22065,"bootTime":1759418263,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:25:27.326435 1122246 start.go:140] virtualization:  
	I1002 21:25:27.329447 1122246 out.go:179] * [test-preload-731213] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:25:27.333282 1122246 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:25:27.333479 1122246 notify.go:221] Checking for updates...
	I1002 21:25:27.339266 1122246 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:25:27.342096 1122246 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:25:27.344939 1122246 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:25:27.347792 1122246 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:25:27.350663 1122246 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:25:27.353950 1122246 config.go:182] Loaded profile config "test-preload-731213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:25:27.357532 1122246 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 21:25:27.360316 1122246 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:25:27.392581 1122246 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:25:27.392707 1122246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:25:27.450898 1122246 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 21:25:27.441757791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:25:27.451014 1122246 docker.go:319] overlay module found
	I1002 21:25:27.454135 1122246 out.go:179] * Using the docker driver based on existing profile
	I1002 21:25:27.457026 1122246 start.go:306] selected driver: docker
	I1002 21:25:27.457042 1122246 start.go:936] validating driver "docker" against &{Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:25:27.457152 1122246 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:25:27.457916 1122246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:25:27.523119 1122246 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 21:25:27.514163519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:25:27.523557 1122246 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:25:27.523588 1122246 cni.go:84] Creating CNI manager for ""
	I1002 21:25:27.523651 1122246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:25:27.523695 1122246 start.go:350] cluster config:
	{Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:25:27.526866 1122246 out.go:179] * Starting "test-preload-731213" primary control-plane node in "test-preload-731213" cluster
	I1002 21:25:27.529642 1122246 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:25:27.532587 1122246 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:25:27.535414 1122246 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:25:27.535508 1122246 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:25:27.554717 1122246 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:25:27.554741 1122246 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:25:27.581462 1122246 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:25:27.581496 1122246 cache.go:59] Caching tarball of preloaded images
	I1002 21:25:27.581678 1122246 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:25:27.584743 1122246 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1002 21:25:27.587618 1122246 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 21:25:27.676371 1122246 preload.go:290] Got checksum from GCS API "d3dc3b83b826438926b7b91af837ed7b"
	I1002 21:25:27.676423 1122246 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:25:50.230201 1122246 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
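The download URL above pins an MD5 digest via its checksum= query parameter, and the "Got checksum from GCS API" / "Finished verifying" lines show the tarball being checked against that digest before the preload is trusted. A minimal standalone sketch of the verification step in Go (verifyMD5 is an illustrative helper, not minikube's actual API; the file name and digest are copied from the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an MD5 hash and compares the hex
// digest with the expected checksum reported by the GCS API.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4",
		"d3dc3b83b826438926b7b91af837ed7b")
	fmt.Println(err)
}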
	I1002 21:25:50.230351 1122246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/config.json ...
	I1002 21:25:50.230579 1122246 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:25:50.230611 1122246 start.go:361] acquireMachinesLock for test-preload-731213: {Name:mk170b5bf5ce354729553a340fce0dd91742257e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:25:50.230693 1122246 start.go:365] duration metric: took 54.702µs to acquireMachinesLock for "test-preload-731213"
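The lock spec logged above (Delay:500ms Timeout:10m0s) describes poll-until-acquired semantics; here the lock is uncontended, so acquisition takes ~55µs. A generic sketch of that retry loop under those parameters (acquireWithRetry and the toy contender are illustrative, not minikube's lock implementation):

package main

import (
	"fmt"
	"time"
)

// acquireWithRetry polls try() every delay until it succeeds or the
// timeout elapses — the Delay/Timeout semantics from the lock spec above.
func acquireWithRetry(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !try() {
		if time.Now().After(deadline) {
			return fmt.Errorf("lock not acquired within %s", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	n := 0
	// Toy contender that succeeds on the third attempt.
	err := acquireWithRetry(func() bool { n++; return n >= 3 },
		500*time.Millisecond, 10*time.Minute)
	fmt.Println(err)
}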
	I1002 21:25:50.230711 1122246 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:25:50.230717 1122246 fix.go:55] fixHost starting: 
	I1002 21:25:50.231002 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:50.252215 1122246 fix.go:113] recreateIfNeeded on test-preload-731213: state=Stopped err=<nil>
	W1002 21:25:50.252261 1122246 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:25:50.255469 1122246 out.go:252] * Restarting existing docker container for "test-preload-731213" ...
	I1002 21:25:50.255560 1122246 cli_runner.go:164] Run: docker start test-preload-731213
	I1002 21:25:50.517353 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:50.536841 1122246 kic.go:430] container "test-preload-731213" state is running.
	I1002 21:25:50.537230 1122246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-731213
	I1002 21:25:50.566274 1122246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/config.json ...
	I1002 21:25:50.566902 1122246 machine.go:93] provisionDockerMachine start ...
	I1002 21:25:50.566969 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:50.588115 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:50.588455 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:50.588468 1122246 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:25:50.590182 1122246 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:25:53.721663 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-731213
	
	I1002 21:25:53.721759 1122246 ubuntu.go:182] provisioning hostname "test-preload-731213"
	I1002 21:25:53.721861 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:53.739182 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:53.739490 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:53.739505 1122246 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-731213 && echo "test-preload-731213" | sudo tee /etc/hostname
	I1002 21:25:53.879469 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-731213
	
	I1002 21:25:53.879606 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:53.897467 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:53.897790 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:53.897813 1122246 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-731213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-731213/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-731213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:25:54.030656 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:25:54.030684 1122246 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:25:54.030706 1122246 ubuntu.go:190] setting up certificates
	I1002 21:25:54.030716 1122246 provision.go:84] configureAuth start
	I1002 21:25:54.030779 1122246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-731213
	I1002 21:25:54.051461 1122246 provision.go:143] copyHostCerts
	I1002 21:25:54.051539 1122246 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:25:54.051561 1122246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:25:54.051644 1122246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:25:54.051759 1122246 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:25:54.051772 1122246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:25:54.051801 1122246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:25:54.051863 1122246 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:25:54.051871 1122246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:25:54.051895 1122246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:25:54.051944 1122246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.test-preload-731213 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-731213]
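configureAuth regenerates the machine's server certificate with exactly the org and SAN list shown in the line above. A compact illustration of producing such a certificate with Go's crypto/x509; it is self-signed here for brevity, whereas the real certificate is signed by the ca.pem/ca-key.pem pair referenced in the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Error handling elided for brevity.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-731213"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "test-preload-731213"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}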
	I1002 21:25:54.568261 1122246 provision.go:177] copyRemoteCerts
	I1002 21:25:54.568337 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:25:54.568378 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:54.585219 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:54.681995 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:25:54.700982 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 21:25:54.718163 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:25:54.735369 1122246 provision.go:87] duration metric: took 704.630404ms to configureAuth
	I1002 21:25:54.735402 1122246 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:25:54.735601 1122246 config.go:182] Loaded profile config "test-preload-731213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:25:54.735711 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:54.752853 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:54.753173 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:54.753193 1122246 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:25:55.031129 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:25:55.031220 1122246 machine.go:96] duration metric: took 4.464302471s to provisionDockerMachine
	I1002 21:25:55.031247 1122246 start.go:294] postStartSetup for "test-preload-731213" (driver="docker")
	I1002 21:25:55.031286 1122246 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:25:55.031407 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:25:55.031473 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.055216 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.150458 1122246 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:25:55.153893 1122246 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:25:55.153920 1122246 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:25:55.153931 1122246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:25:55.153991 1122246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:25:55.154102 1122246 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:25:55.154206 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:25:55.162722 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:25:55.182441 1122246 start.go:297] duration metric: took 151.151436ms for postStartSetup
	I1002 21:25:55.182527 1122246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:25:55.182576 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.200580 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.295120 1122246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:25:55.299658 1122246 fix.go:57] duration metric: took 5.068932468s for fixHost
	I1002 21:25:55.299683 1122246 start.go:84] releasing machines lock for "test-preload-731213", held for 5.068977874s
	I1002 21:25:55.299760 1122246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-731213
	I1002 21:25:55.319429 1122246 ssh_runner.go:195] Run: cat /version.json
	I1002 21:25:55.319471 1122246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:25:55.319483 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.319532 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.346429 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.349130 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.529505 1122246 ssh_runner.go:195] Run: systemctl --version
	I1002 21:25:55.535891 1122246 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:25:55.572265 1122246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:25:55.576621 1122246 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:25:55.576693 1122246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:25:55.584403 1122246 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
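The find invocation above would rename any bridge or podman CNI config by appending .mk_disabled, leaving the kindnet config recommended earlier as the only active one; on this host there was nothing to disable. An equivalent directory walk in Go, for readers who prefer it spelled out (paths and suffix taken from the command):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join("/etc/cni/net.d", name)
			// Move the config aside, matching the -exec mv above.
			os.Rename(old, old+".mk_disabled")
		}
	}
}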
	I1002 21:25:55.584425 1122246 start.go:496] detecting cgroup driver to use...
	I1002 21:25:55.584456 1122246 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:25:55.584504 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:25:55.599309 1122246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:25:55.612059 1122246 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:25:55.612132 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:25:55.627640 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:25:55.640507 1122246 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:25:55.745495 1122246 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:25:55.859050 1122246 docker.go:234] disabling docker service ...
	I1002 21:25:55.859130 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:25:55.874286 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:25:55.887113 1122246 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:25:56.006243 1122246 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:25:56.133868 1122246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:25:56.146727 1122246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:25:56.161194 1122246 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 21:25:56.161260 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.170314 1122246 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:25:56.170465 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.179208 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.187852 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.196703 1122246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:25:56.205155 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.214558 1122246 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.223082 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
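Taken together, the sed pipeline above leaves /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (reconstructed from the commands; key order and any surrounding [crio.*] section headers in the real drop-in may differ):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]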
	I1002 21:25:56.232122 1122246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:25:56.239659 1122246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:25:56.247121 1122246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:25:56.353590 1122246 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:25:56.477960 1122246 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:25:56.478122 1122246 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:25:56.482582 1122246 start.go:564] Will wait 60s for crictl version
	I1002 21:25:56.482729 1122246 ssh_runner.go:195] Run: which crictl
	I1002 21:25:56.486659 1122246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:25:56.512027 1122246 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:25:56.512115 1122246 ssh_runner.go:195] Run: crio --version
	I1002 21:25:56.545070 1122246 ssh_runner.go:195] Run: crio --version
	I1002 21:25:56.578187 1122246 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1002 21:25:56.579882 1122246 cli_runner.go:164] Run: docker network inspect test-preload-731213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:25:56.598285 1122246 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:25:56.602509 1122246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
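This one-liner is an idempotent rewrite of /etc/hosts: grep -v strips any stale host.minikube.internal entry, echo appends the current mapping, and the temp file is copied (not renamed) back — most likely because /etc/hosts is bind-mounted inside the container, where a rename across the mount would fail. The same pattern in Go (ensureLine is an illustrative name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureLine drops any line ending in "\thost" and appends a fresh
// "ip\thost" mapping — the grep -v / echo pipeline from the log.
func ensureLine(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, l := range strings.Split(string(data), "\n") {
		if l != "" && !strings.HasSuffix(l, "\t"+host) {
			kept = append(kept, l)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureLine("/etc/hosts", "192.168.76.1", "host.minikube.internal"))
}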
	I1002 21:25:56.612549 1122246 kubeadm.go:883] updating cluster {Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:25:56.612664 1122246 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:25:56.612719 1122246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:25:56.644946 1122246 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:25:56.644969 1122246 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:25:56.645032 1122246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:25:56.672352 1122246 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:25:56.672379 1122246 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:25:56.672387 1122246 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1002 21:25:56.672484 1122246 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-731213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:25:56.672571 1122246 ssh_runner.go:195] Run: crio config
	I1002 21:25:56.741437 1122246 cni.go:84] Creating CNI manager for ""
	I1002 21:25:56.741462 1122246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:25:56.741475 1122246 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:25:56.741521 1122246 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-731213 NodeName:test-preload-731213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:25:56.741741 1122246 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-731213"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:25:56.741841 1122246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1002 21:25:56.749454 1122246 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:25:56.749590 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:25:56.757242 1122246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1002 21:25:56.770098 1122246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:25:56.782753 1122246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 21:25:56.795644 1122246 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:25:56.799054 1122246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:25:56.808363 1122246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:25:56.914573 1122246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:25:56.929616 1122246 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213 for IP: 192.168.76.2
	I1002 21:25:56.929677 1122246 certs.go:195] generating shared ca certs ...
	I1002 21:25:56.929709 1122246 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:56.929864 1122246 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:25:56.929930 1122246 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:25:56.929953 1122246 certs.go:257] generating profile certs ...
	I1002 21:25:56.930174 1122246 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.key
	I1002 21:25:56.930283 1122246 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/apiserver.key.07a46334
	I1002 21:25:56.930355 1122246 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/proxy-client.key
	I1002 21:25:56.930485 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:25:56.930541 1122246 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:25:56.930584 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:25:56.930630 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:25:56.930682 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:25:56.930725 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:25:56.930794 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:25:56.931382 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:25:56.950570 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:25:56.969093 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:25:56.987780 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:25:57.008910 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 21:25:57.028436 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:25:57.047010 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:25:57.069387 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:25:57.095721 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:25:57.120390 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:25:57.143300 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:25:57.162433 1122246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:25:57.175066 1122246 ssh_runner.go:195] Run: openssl version
	I1002 21:25:57.181043 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:25:57.190020 1122246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:25:57.194781 1122246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:25:57.194858 1122246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:25:57.236720 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:25:57.244876 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:25:57.253363 1122246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:25:57.257047 1122246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:25:57.257108 1122246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:25:57.298872 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:25:57.306654 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:25:57.314771 1122246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:25:57.318580 1122246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:25:57.318644 1122246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:25:57.360672 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:25:57.368537 1122246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:25:57.372277 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:25:57.413000 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:25:57.454361 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:25:57.495452 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:25:57.543863 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:25:57.594163 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
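Each "openssl x509 -checkend 86400" run above exits non-zero if the certificate's NotAfter falls within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing certs are still usable. The equivalent check in Go (expiresWithin is an illustrative helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file will
// be past NotAfter within d — what "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}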
	I1002 21:25:57.664027 1122246 kubeadm.go:400] StartCluster: {Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:25:57.664154 1122246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:25:57.664260 1122246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:25:57.756496 1122246 cri.go:89] found id: "db921aa74c72ed857d6d93672188d934ac1dd700cff01d3ff816b86737f499e6"
	I1002 21:25:57.756522 1122246 cri.go:89] found id: "b86b5cbe0416fa59f31776431f594cc7379b987139f3e4f95f67a028888d2ce2"
	I1002 21:25:57.756528 1122246 cri.go:89] found id: "7547cdcf2123708b60377f5fcaa59e29beba6832d2d16a16a3a45be99c942ccd"
	I1002 21:25:57.756533 1122246 cri.go:89] found id: ""
	I1002 21:25:57.756604 1122246 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:25:57.782667 1122246 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:25:57Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:25:57.782752 1122246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:25:57.796835 1122246 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:25:57.796870 1122246 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:25:57.796939 1122246 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:25:57.808145 1122246 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:25:57.808605 1122246 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-731213" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:25:57.808735 1122246 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-731213" cluster setting kubeconfig missing "test-preload-731213" context setting]
	I1002 21:25:57.809051 1122246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:57.809682 1122246 kapi.go:59] client config for test-preload-731213: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:25:57.810399 1122246 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:25:57.810430 1122246 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:25:57.810436 1122246 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:25:57.810441 1122246 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:25:57.810445 1122246 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:25:57.810795 1122246 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:25:57.822706 1122246 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:25:57.822753 1122246 kubeadm.go:601] duration metric: took 25.863417ms to restartPrimaryControlPlane
	I1002 21:25:57.822766 1122246 kubeadm.go:402] duration metric: took 158.747072ms to StartCluster
	I1002 21:25:57.822794 1122246 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:57.822873 1122246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:25:57.823599 1122246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:57.823809 1122246 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:25:57.824132 1122246 config.go:182] Loaded profile config "test-preload-731213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:25:57.824189 1122246 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:25:57.824323 1122246 addons.go:69] Setting storage-provisioner=true in profile "test-preload-731213"
	I1002 21:25:57.824343 1122246 addons.go:238] Setting addon storage-provisioner=true in "test-preload-731213"
	W1002 21:25:57.824349 1122246 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:25:57.824388 1122246 host.go:66] Checking if "test-preload-731213" exists ...
	I1002 21:25:57.824940 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:57.825195 1122246 addons.go:69] Setting default-storageclass=true in profile "test-preload-731213"
	I1002 21:25:57.825222 1122246 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-731213"
	I1002 21:25:57.825521 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:57.826380 1122246 out.go:179] * Verifying Kubernetes components...
	I1002 21:25:57.830289 1122246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:25:57.866567 1122246 kapi.go:59] client config for test-preload-731213: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:25:57.866962 1122246 addons.go:238] Setting addon default-storageclass=true in "test-preload-731213"
	W1002 21:25:57.866975 1122246 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:25:57.867001 1122246 host.go:66] Checking if "test-preload-731213" exists ...
	I1002 21:25:57.867573 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:57.879772 1122246 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:25:57.880986 1122246 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:25:57.881018 1122246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:25:57.881086 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:57.906350 1122246 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:25:57.906370 1122246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:25:57.906431 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:57.930875 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:57.946252 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:58.123426 1122246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:25:58.135459 1122246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:25:58.195049 1122246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:26:02.241600 1122246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.118135137s)
	I1002 21:26:02.241665 1122246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.106181739s)
	I1002 21:26:02.242002 1122246 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.046927221s)
	I1002 21:26:02.242057 1122246 node_ready.go:35] waiting up to 6m0s for node "test-preload-731213" to be "Ready" ...
	I1002 21:26:02.270377 1122246 node_ready.go:49] node "test-preload-731213" is "Ready"
	I1002 21:26:02.270408 1122246 node_ready.go:38] duration metric: took 28.327879ms for node "test-preload-731213" to be "Ready" ...
	I1002 21:26:02.270420 1122246 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:26:02.270487 1122246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:26:02.285392 1122246 api_server.go:72] duration metric: took 4.461547803s to wait for apiserver process to appear ...
	I1002 21:26:02.285462 1122246 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:26:02.285495 1122246 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:26:02.294823 1122246 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:26:02.294851 1122246 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:26:02.297441 1122246 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:26:02.298545 1122246 addons.go:514] duration metric: took 4.474340992s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:26:02.785681 1122246 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:26:02.794935 1122246 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:26:02.795949 1122246 api_server.go:141] control plane version: v1.32.0
	I1002 21:26:02.795977 1122246 api_server.go:131] duration metric: took 510.494941ms to wait for apiserver health ...
	I1002 21:26:02.796006 1122246 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:26:02.800483 1122246 system_pods.go:59] 8 kube-system pods found
	I1002 21:26:02.800534 1122246 system_pods.go:61] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:02.800545 1122246 system_pods.go:61] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:02.800553 1122246 system_pods.go:61] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:26:02.800562 1122246 system_pods.go:61] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:02.800584 1122246 system_pods.go:61] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:02.800592 1122246 system_pods.go:61] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:26:02.800604 1122246 system_pods.go:61] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:02.800617 1122246 system_pods.go:61] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:26:02.800626 1122246 system_pods.go:74] duration metric: took 4.612821ms to wait for pod list to return data ...
	I1002 21:26:02.800639 1122246 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:26:02.804397 1122246 default_sa.go:45] found service account: "default"
	I1002 21:26:02.804423 1122246 default_sa.go:55] duration metric: took 3.769446ms for default service account to be created ...
	I1002 21:26:02.804444 1122246 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:26:02.902921 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:02.902955 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:02.902965 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:02.902996 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:26:02.903012 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:02.903019 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:02.903030 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:26:02.903037 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:02.903045 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:26:02.903072 1122246 retry.go:31] will retry after 251.824632ms: missing components: kube-controller-manager
	I1002 21:26:03.158534 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:03.158571 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:03.158581 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:03.158590 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:26:03.158598 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:03.158607 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:03.158623 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:26:03.158632 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:03.158639 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:26:03.158658 1122246 retry.go:31] will retry after 321.616788ms: missing components: kube-controller-manager
	I1002 21:26:03.484241 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:03.484279 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:03.484307 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:03.484321 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:03.484329 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:03.484340 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:03.484345 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:03.484351 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:03.484365 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:03.484388 1122246 retry.go:31] will retry after 415.139037ms: missing components: kube-controller-manager
	I1002 21:26:03.902674 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:03.902713 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:03.902724 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:03.902734 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:03.902741 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:03.902785 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:03.902801 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:03.902809 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:03.902813 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:03.902829 1122246 retry.go:31] will retry after 600.510316ms: missing components: kube-controller-manager
	I1002 21:26:04.506994 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:04.507032 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:04.507042 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:04.507048 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:04.507055 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:04.507068 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:04.507078 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:04.507084 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:04.507088 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:04.507103 1122246 retry.go:31] will retry after 713.078464ms: missing components: kube-controller-manager
	I1002 21:26:05.223866 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:05.223903 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:05.223913 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:05.223918 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:05.223925 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:05.223933 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:05.223938 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:05.223944 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:05.223955 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:05.223969 1122246 retry.go:31] will retry after 676.237831ms: missing components: kube-controller-manager
	I1002 21:26:05.903834 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:05.903873 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:05.903883 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:05.903922 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:05.903935 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:05.903942 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:05.903948 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:05.903959 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:05.903969 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:05.904010 1122246 retry.go:31] will retry after 1.006852447s: missing components: kube-controller-manager
	I1002 21:26:06.914315 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:06.914354 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:06.914370 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:06.914384 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:06.914391 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:06.914407 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:06.914424 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:06.914431 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:06.914435 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:06.914451 1122246 retry.go:31] will retry after 1.063662097s: missing components: kube-controller-manager
	I1002 21:26:07.981410 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:07.981448 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:07.981459 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:07.981465 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:07.981471 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:07.981477 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:07.981483 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:07.981493 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:07.981503 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:07.981517 1122246 retry.go:31] will retry after 1.610085091s: missing components: kube-controller-manager
	I1002 21:26:09.596456 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:09.596493 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:09.596501 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:09.596507 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:09.596514 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:09.596520 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:09.596525 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:09.596530 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:09.596534 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:09.596548 1122246 retry.go:31] will retry after 1.447961148s: missing components: kube-controller-manager
	I1002 21:26:11.047611 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:11.047646 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:11.047654 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:11.047659 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:11.047671 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:11.047678 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:11.047683 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:11.047689 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:11.047694 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:11.047712 1122246 retry.go:31] will retry after 2.479608296s: missing components: kube-controller-manager
	I1002 21:26:13.531000 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:13.531038 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:13.531099 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:13.531145 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:13.531151 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:13.531162 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:13.531167 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:13.531190 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:13.531200 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:13.531215 1122246 retry.go:31] will retry after 3.60172407s: missing components: kube-controller-manager
	I1002 21:26:17.137025 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:17.137060 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:17.137069 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:17.137075 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:17.137079 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:17.137086 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:17.137090 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:17.137095 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:17.137099 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:17.137113 1122246 retry.go:31] will retry after 4.032132552s: missing components: kube-controller-manager
	I1002 21:26:21.173378 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:21.173412 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:21.173420 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:21.173426 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:21.173430 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:21.173437 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:21.173441 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:21.173446 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:21.173450 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:21.173469 1122246 retry.go:31] will retry after 3.818641123s: missing components: kube-controller-manager
	I1002 21:26:24.997370 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:24.997407 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:24.997415 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:24.997421 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:24.997432 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:24.997439 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:24.997447 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:24.997458 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:24.997463 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:24.997478 1122246 retry.go:31] will retry after 5.221439233s: missing components: kube-controller-manager
	I1002 21:26:30.222874 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:30.222916 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:30.222928 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:30.222934 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:30.222938 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:30.222947 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:30.222951 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:30.222958 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:30.222967 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:30.222982 1122246 retry.go:31] will retry after 5.43892632s: missing components: kube-controller-manager
	I1002 21:26:35.665150 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:35.665188 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:35.665196 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:35.665202 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:35.665207 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:35.665214 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:35.665218 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:35.665223 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:35.665227 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:35.665241 1122246 retry.go:31] will retry after 8.92061201s: missing components: kube-controller-manager
	I1002 21:26:44.590807 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:44.590842 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:26:44.590850 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:44.590855 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:44.590859 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:44.590866 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:44.590871 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:44.590876 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:44.590880 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:44.590895 1122246 retry.go:31] will retry after 10.39177209s: missing components: kube-controller-manager
	I1002 21:26:54.986114 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:54.986145 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:26:54.986152 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:54.986156 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:54.986160 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:54.986168 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:54.986172 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:54.986178 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:54.986183 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:54.986197 1122246 retry.go:31] will retry after 11.58542642s: missing components: kube-controller-manager
	I1002 21:27:06.577626 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:27:06.577656 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:27:06.577664 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:27:06.577668 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:27:06.577673 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:27:06.577680 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:27:06.577684 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:27:06.577690 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:27:06.577694 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:27:06.577707 1122246 retry.go:31] will retry after 16.95306995s: missing components: kube-controller-manager
	I1002 21:27:23.534179 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:27:23.534210 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:27:23.534217 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:27:23.534221 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:27:23.534225 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:27:23.534233 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:27:23.534239 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:27:23.534245 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:27:23.534249 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:27:23.534263 1122246 retry.go:31] will retry after 26.268394846s: missing components: kube-controller-manager
	I1002 21:27:49.807333 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:27:49.807366 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:27:49.807374 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:27:49.807379 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:27:49.807384 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:27:49.807391 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:27:49.807396 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:27:49.807401 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:27:49.807405 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:27:49.807419 1122246 retry.go:31] will retry after 29.932549952s: missing components: kube-controller-manager
	I1002 21:28:19.744977 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:28:19.745007 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:28:19.745014 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:28:19.745018 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:28:19.745023 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:28:19.745029 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:28:19.745034 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:28:19.745039 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:28:19.745043 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:28:19.745057 1122246 retry.go:31] will retry after 35.517252142s: missing components: kube-controller-manager
	I1002 21:28:55.266570 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:28:55.266607 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:28:55.266615 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:28:55.266620 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:28:55.266625 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:28:55.266633 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:28:55.266637 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:28:55.266644 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:28:55.266648 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:28:55.266662 1122246 retry.go:31] will retry after 39.008898996s: missing components: kube-controller-manager
	I1002 21:29:34.278965 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:29:34.279000 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:29:34.279008 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:29:34.279013 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:29:34.279017 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:29:34.279024 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:29:34.279029 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:29:34.279034 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:29:34.279038 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:29:34.279053 1122246 retry.go:31] will retry after 57.102428017s: missing components: kube-controller-manager
	I1002 21:30:31.384608 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:30:31.384644 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:30:31.384652 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:30:31.384656 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:30:31.384661 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:30:31.384669 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:30:31.384673 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:30:31.384679 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:30:31.384685 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:30:31.384707 1122246 retry.go:31] will retry after 1m8.0959651s: missing components: kube-controller-manager
	I1002 21:31:39.484218 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:31:39.484247 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:31:39.484254 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:31:39.484258 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:31:39.484263 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:31:39.484271 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:31:39.484276 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:31:39.484280 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:31:39.484284 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:31:39.487560 1122246 out.go:203] 
	W1002 21:31:39.490908 1122246 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-controller-manager
	W1002 21:31:39.490933 1122246 out.go:285] * 
	W1002 21:31:39.493070 1122246 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:31:39.496153 1122246 out.go:203] 

                                                
                                                
** /stderr **
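
Note: in the stderr log above, minikube polls the apiserver at https://192.168.76.2:8443/healthz, first getting a 500 (only poststarthook/rbac/bootstrap-roles still failing) and, half a second later, a 200 "ok". A minimal sketch of such a probe follows; the healthz helper here is hypothetical and only stands in for minikube's api_server.go check, and skipping TLS verification is an assumption for the sketch (a real client would verify against the cluster CA rather than skip verification).

// Sketch of a /healthz probe against a bootstrapping apiserver.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz (hypothetical helper) GETs the endpoint and returns the status
// code plus body; the apiserver returns 500 with a check-by-check report
// until every poststarthook passes, then 200 with "ok".
func healthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip verification of the
			// cluster-internal cert instead of pinning the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := healthz("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	fmt.Println(code, body) // 500 with "[-]poststarthook/..." lines until ready, then 200 "ok"
}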
preload_test.go:67: out/minikube-linux-arm64 start -p test-preload-731213 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
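
The failure above is a timeout: the retry.go:31 lines in the stderr show minikube re-listing kube-system pods on a growing interval (from ~251ms up past a minute) until the wait budget (6m0s here) is spent, with kube-controller-manager still Pending on every pass. A minimal sketch of that poll-with-backoff shape, using a hypothetical waitFor helper rather than minikube's actual retry package:

// Sketch of polling a condition with growing backoff until a deadline.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor (hypothetical) polls check until it returns nil or timeout passes.
// The interval grows roughly like the log's 251ms -> 321ms -> ... -> 1m8s steps.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	interval := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		fmt.Printf("will retry after %s: %v\n", interval, err)
		time.Sleep(interval)
		interval = interval * 13 / 10 // grow ~1.3x per attempt...
		if interval > time.Minute {
			interval = time.Minute // ...capped so retries keep happening
		}
	}
}

func main() {
	// Stand-in for the pod list that never reports the controller-manager Ready.
	missing := errors.New("missing components: kube-controller-manager")
	_ = waitFor(func() error { return missing }, 2*time.Second)
}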
panic.go:636: *** TestPreload FAILED at 2025-10-02 21:31:39.540933263 +0000 UTC m=+4415.640869761
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
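
The post-mortem below dumps the full docker inspect JSON for the node container; during the run itself, minikube reads single fields with Go templates instead (see the docker container inspect --format={{.State.Status}} cli_runner lines near the top of the stderr log). A minimal sketch of that pattern, with a hypothetical containerStatus helper shelling out the same way:

// Sketch of reading one field from `docker container inspect` via a template.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus (hypothetical helper) asks the docker CLI for just
// .State.Status rather than parsing the full inspect JSON.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("test-preload-731213")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("status:", status) // e.g. "running", matching "State.Status" in the JSON below
}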
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-731213
helpers_test.go:243: (dbg) docker inspect test-preload-731213:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79",
	        "Created": "2025-10-02T21:24:21.749389053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1122372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:25:50.289906057Z",
	            "FinishedAt": "2025-10-02T21:25:27.009179444Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/hostname",
	        "HostsPath": "/var/lib/docker/containers/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/hosts",
	        "LogPath": "/var/lib/docker/containers/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79-json.log",
	        "Name": "/test-preload-731213",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-731213:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-731213",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79",
	                "LowerDir": "/var/lib/docker/overlay2/f038cfc5e4c88b563510e2172ecc3cd2c93a407523b267aa8d9cbd8e4f7941bd-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f038cfc5e4c88b563510e2172ecc3cd2c93a407523b267aa8d9cbd8e4f7941bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f038cfc5e4c88b563510e2172ecc3cd2c93a407523b267aa8d9cbd8e4f7941bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f038cfc5e4c88b563510e2172ecc3cd2c93a407523b267aa8d9cbd8e4f7941bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-731213",
	                "Source": "/var/lib/docker/volumes/test-preload-731213/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-731213",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-731213",
	                "name.minikube.sigs.k8s.io": "test-preload-731213",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0b4ae5680b07b29b158a7e1a054564b8c5597a12e58ea05a06b8b394c6a5df9",
	            "SandboxKey": "/var/run/docker/netns/a0b4ae5680b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34094"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-731213": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:aa:be:23:fe:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "01d2009e46232c1fb8e8dfc25be78cf3d3780148b0722e961198ae4b0ef48d16",
	                    "EndpointID": "66081f941e1c93fb199efa650d11e857665b28270c828052a96a2aa447d079de",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-731213",
	                        "983787584c2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
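The inspect output above shows each published container port bound to an ephemeral host port on 127.0.0.1 (22/tcp -> 34091, 8443/tcp -> 34094, and so on). The logs below resolve the SSH port with a docker inspect Go template; the same lookup as a standalone sketch, assuming only that the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the 127.0.0.1 host port mapped to the container's
// 22/tcp, using the same inspect template the minikube logs below run.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("test-preload-731213")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port) // 34091 in the inspect output above
}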
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p test-preload-731213 -n test-preload-731213
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-731213 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p test-preload-731213 logs -n 25: (1.129300143s)
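For context, the Audit table below records the scenario TestPreload drives: a cold start with --preload=false on Kubernetes v1.32.0, an extra image pull, a stop, then the restart with preloads enabled that failed. A condensed sketch of that sequence against the minikube binary (flags copied from the audit entries; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test, as preload_test.go does.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube %v: %w\n%s", args, err, out)
	}
	return nil
}

func main() {
	const profile = "test-preload-731213"
	steps := [][]string{
		// 1. cold start without the preload tarball, on an older Kubernetes
		{"start", "-p", profile, "--memory=3072", "--wait=true", "--preload=false",
			"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.32.0"},
		// 2. pull an extra image so the restart has state to carry over
		{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		// 3. stop, then restart with preloads enabled (the step that failed above)
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=3072", "--wait=true",
			"--driver=docker", "--container-runtime=crio"},
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			fmt.Println(err)
			return
		}
	}
}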
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-633145 cp multinode-633145-m03:/home/docker/cp-test.txt multinode-633145:/home/docker/cp-test_multinode-633145-m03_multinode-633145.txt         │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ ssh     │ multinode-633145 ssh -n multinode-633145-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ ssh     │ multinode-633145 ssh -n multinode-633145 sudo cat /home/docker/cp-test_multinode-633145-m03_multinode-633145.txt                                          │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ cp      │ multinode-633145 cp multinode-633145-m03:/home/docker/cp-test.txt multinode-633145-m02:/home/docker/cp-test_multinode-633145-m03_multinode-633145-m02.txt │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ ssh     │ multinode-633145 ssh -n multinode-633145-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ ssh     │ multinode-633145 ssh -n multinode-633145-m02 sudo cat /home/docker/cp-test_multinode-633145-m03_multinode-633145-m02.txt                                  │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ node    │ multinode-633145 node stop m03                                                                                                                            │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ node    │ multinode-633145 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:20 UTC │
	│ node    │ list -p multinode-633145                                                                                                                                  │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ stop    │ -p multinode-633145                                                                                                                                       │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │ 02 Oct 25 21:21 UTC │
	│ start   │ -p multinode-633145 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │ 02 Oct 25 21:22 UTC │
	│ node    │ list -p multinode-633145                                                                                                                                  │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ multinode-633145 node delete m03                                                                                                                          │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ stop    │ multinode-633145 stop                                                                                                                                     │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ -p multinode-633145 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:23 UTC │
	│ node    │ list -p multinode-633145                                                                                                                                  │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:23 UTC │                     │
	│ start   │ -p multinode-633145-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-633145-m02 │ jenkins │ v1.37.0 │ 02 Oct 25 21:23 UTC │                     │
	│ start   │ -p multinode-633145-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-633145-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 21:23 UTC │ 02 Oct 25 21:24 UTC │
	│ node    │ add -p multinode-633145                                                                                                                                   │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:24 UTC │                     │
	│ delete  │ -p multinode-633145-m03                                                                                                                                   │ multinode-633145-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 21:24 UTC │ 02 Oct 25 21:24 UTC │
	│ delete  │ -p multinode-633145                                                                                                                                       │ multinode-633145     │ jenkins │ v1.37.0 │ 02 Oct 25 21:24 UTC │ 02 Oct 25 21:24 UTC │
	│ start   │ -p test-preload-731213 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-731213  │ jenkins │ v1.37.0 │ 02 Oct 25 21:24 UTC │ 02 Oct 25 21:25 UTC │
	│ image   │ test-preload-731213 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-731213  │ jenkins │ v1.37.0 │ 02 Oct 25 21:25 UTC │ 02 Oct 25 21:25 UTC │
	│ stop    │ -p test-preload-731213                                                                                                                                    │ test-preload-731213  │ jenkins │ v1.37.0 │ 02 Oct 25 21:25 UTC │ 02 Oct 25 21:25 UTC │
	│ start   │ -p test-preload-731213 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-731213  │ jenkins │ v1.37.0 │ 02 Oct 25 21:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:25:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:25:27.323784 1122246 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:25:27.324355 1122246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:25:27.324682 1122246 out.go:374] Setting ErrFile to fd 2...
	I1002 21:25:27.324713 1122246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:25:27.325007 1122246 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:25:27.325427 1122246 out.go:368] Setting JSON to false
	I1002 21:25:27.326338 1122246 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22065,"bootTime":1759418263,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:25:27.326435 1122246 start.go:140] virtualization:  
	I1002 21:25:27.329447 1122246 out.go:179] * [test-preload-731213] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:25:27.333282 1122246 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:25:27.333479 1122246 notify.go:221] Checking for updates...
	I1002 21:25:27.339266 1122246 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:25:27.342096 1122246 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:25:27.344939 1122246 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:25:27.347792 1122246 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:25:27.350663 1122246 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:25:27.353950 1122246 config.go:182] Loaded profile config "test-preload-731213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:25:27.357532 1122246 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 21:25:27.360316 1122246 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:25:27.392581 1122246 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:25:27.392707 1122246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:25:27.450898 1122246 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 21:25:27.441757791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:25:27.451014 1122246 docker.go:319] overlay module found
	I1002 21:25:27.454135 1122246 out.go:179] * Using the docker driver based on existing profile
	I1002 21:25:27.457026 1122246 start.go:306] selected driver: docker
	I1002 21:25:27.457042 1122246 start.go:936] validating driver "docker" against &{Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:25:27.457152 1122246 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:25:27.457916 1122246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:25:27.523119 1122246 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 21:25:27.514163519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:25:27.523557 1122246 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:25:27.523588 1122246 cni.go:84] Creating CNI manager for ""
	I1002 21:25:27.523651 1122246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:25:27.523695 1122246 start.go:350] cluster config:
	{Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:25:27.526866 1122246 out.go:179] * Starting "test-preload-731213" primary control-plane node in "test-preload-731213" cluster
	I1002 21:25:27.529642 1122246 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:25:27.532587 1122246 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:25:27.535414 1122246 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:25:27.535508 1122246 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:25:27.554717 1122246 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:25:27.554741 1122246 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:25:27.581462 1122246 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:25:27.581496 1122246 cache.go:59] Caching tarball of preloaded images
	I1002 21:25:27.581678 1122246 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:25:27.584743 1122246 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1002 21:25:27.587618 1122246 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 21:25:27.676371 1122246 preload.go:290] Got checksum from GCS API "d3dc3b83b826438926b7b91af837ed7b"
	I1002 21:25:27.676423 1122246 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:25:50.230201 1122246 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1002 21:25:50.230351 1122246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/config.json ...
	I1002 21:25:50.230579 1122246 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:25:50.230611 1122246 start.go:361] acquireMachinesLock for test-preload-731213: {Name:mk170b5bf5ce354729553a340fce0dd91742257e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:25:50.230693 1122246 start.go:365] duration metric: took 54.702µs to acquireMachinesLock for "test-preload-731213"
	I1002 21:25:50.230711 1122246 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:25:50.230717 1122246 fix.go:55] fixHost starting: 
	I1002 21:25:50.231002 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:50.252215 1122246 fix.go:113] recreateIfNeeded on test-preload-731213: state=Stopped err=<nil>
	W1002 21:25:50.252261 1122246 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:25:50.255469 1122246 out.go:252] * Restarting existing docker container for "test-preload-731213" ...
	I1002 21:25:50.255560 1122246 cli_runner.go:164] Run: docker start test-preload-731213
	I1002 21:25:50.517353 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:50.536841 1122246 kic.go:430] container "test-preload-731213" state is running.
	I1002 21:25:50.537230 1122246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-731213
	I1002 21:25:50.566274 1122246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/config.json ...
	I1002 21:25:50.566902 1122246 machine.go:93] provisionDockerMachine start ...
	I1002 21:25:50.566969 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:50.588115 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:50.588455 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:50.588468 1122246 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:25:50.590182 1122246 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:25:53.721663 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-731213
	
	I1002 21:25:53.721759 1122246 ubuntu.go:182] provisioning hostname "test-preload-731213"
	I1002 21:25:53.721861 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:53.739182 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:53.739490 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:53.739505 1122246 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-731213 && echo "test-preload-731213" | sudo tee /etc/hostname
	I1002 21:25:53.879469 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-731213
	
	I1002 21:25:53.879606 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:53.897467 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:53.897790 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:53.897813 1122246 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-731213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-731213/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-731213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:25:54.030656 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:25:54.030684 1122246 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:25:54.030706 1122246 ubuntu.go:190] setting up certificates
	I1002 21:25:54.030716 1122246 provision.go:84] configureAuth start
	I1002 21:25:54.030779 1122246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-731213
	I1002 21:25:54.051461 1122246 provision.go:143] copyHostCerts
	I1002 21:25:54.051539 1122246 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:25:54.051561 1122246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:25:54.051644 1122246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:25:54.051759 1122246 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:25:54.051772 1122246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:25:54.051801 1122246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:25:54.051863 1122246 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:25:54.051871 1122246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:25:54.051895 1122246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:25:54.051944 1122246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.test-preload-731213 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-731213]
	I1002 21:25:54.568261 1122246 provision.go:177] copyRemoteCerts
	I1002 21:25:54.568337 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:25:54.568378 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:54.585219 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:54.681995 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:25:54.700982 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 21:25:54.718163 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:25:54.735369 1122246 provision.go:87] duration metric: took 704.630404ms to configureAuth
	I1002 21:25:54.735402 1122246 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:25:54.735601 1122246 config.go:182] Loaded profile config "test-preload-731213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:25:54.735711 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:54.752853 1122246 main.go:141] libmachine: Using SSH client type: native
	I1002 21:25:54.753173 1122246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34091 <nil> <nil>}
	I1002 21:25:54.753193 1122246 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:25:55.031129 1122246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:25:55.031220 1122246 machine.go:96] duration metric: took 4.464302471s to provisionDockerMachine
	I1002 21:25:55.031247 1122246 start.go:294] postStartSetup for "test-preload-731213" (driver="docker")
	I1002 21:25:55.031286 1122246 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:25:55.031407 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:25:55.031473 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.055216 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.150458 1122246 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:25:55.153893 1122246 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:25:55.153920 1122246 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:25:55.153931 1122246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:25:55.153991 1122246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:25:55.154102 1122246 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:25:55.154206 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:25:55.162722 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:25:55.182441 1122246 start.go:297] duration metric: took 151.151436ms for postStartSetup
	I1002 21:25:55.182527 1122246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:25:55.182576 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.200580 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.295120 1122246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:25:55.299658 1122246 fix.go:57] duration metric: took 5.068932468s for fixHost
	I1002 21:25:55.299683 1122246 start.go:84] releasing machines lock for "test-preload-731213", held for 5.068977874s
	I1002 21:25:55.299760 1122246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-731213
	I1002 21:25:55.319429 1122246 ssh_runner.go:195] Run: cat /version.json
	I1002 21:25:55.319471 1122246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:25:55.319483 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.319532 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:55.346429 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.349130 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:55.529505 1122246 ssh_runner.go:195] Run: systemctl --version
	I1002 21:25:55.535891 1122246 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:25:55.572265 1122246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:25:55.576621 1122246 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:25:55.576693 1122246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:25:55.584403 1122246 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:25:55.584425 1122246 start.go:496] detecting cgroup driver to use...
	I1002 21:25:55.584456 1122246 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:25:55.584504 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:25:55.599309 1122246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:25:55.612059 1122246 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:25:55.612132 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:25:55.627640 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:25:55.640507 1122246 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:25:55.745495 1122246 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:25:55.859050 1122246 docker.go:234] disabling docker service ...
	I1002 21:25:55.859130 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:25:55.874286 1122246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:25:55.887113 1122246 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:25:56.006243 1122246 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:25:56.133868 1122246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:25:56.146727 1122246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:25:56.161194 1122246 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 21:25:56.161260 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.170314 1122246 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:25:56.170465 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.179208 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.187852 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.196703 1122246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:25:56.205155 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.214558 1122246 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.223082 1122246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:25:56.232122 1122246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:25:56.239659 1122246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:25:56.247121 1122246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:25:56.353590 1122246 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:25:56.477960 1122246 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:25:56.478122 1122246 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:25:56.482582 1122246 start.go:564] Will wait 60s for crictl version
	I1002 21:25:56.482729 1122246 ssh_runner.go:195] Run: which crictl
	I1002 21:25:56.486659 1122246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:25:56.512027 1122246 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:25:56.512115 1122246 ssh_runner.go:195] Run: crio --version
	I1002 21:25:56.545070 1122246 ssh_runner.go:195] Run: crio --version
	I1002 21:25:56.578187 1122246 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1002 21:25:56.579882 1122246 cli_runner.go:164] Run: docker network inspect test-preload-731213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:25:56.598285 1122246 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:25:56.602509 1122246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:25:56.612549 1122246 kubeadm.go:883] updating cluster {Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:25:56.612664 1122246 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:25:56.612719 1122246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:25:56.644946 1122246 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:25:56.644969 1122246 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:25:56.645032 1122246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:25:56.672352 1122246 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:25:56.672379 1122246 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:25:56.672387 1122246 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1002 21:25:56.672484 1122246 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-731213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:25:56.672571 1122246 ssh_runner.go:195] Run: crio config
	I1002 21:25:56.741437 1122246 cni.go:84] Creating CNI manager for ""
	I1002 21:25:56.741462 1122246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:25:56.741475 1122246 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:25:56.741521 1122246 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-731213 NodeName:test-preload-731213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:25:56.741741 1122246 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-731213"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:25:56.741841 1122246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1002 21:25:56.749454 1122246 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:25:56.749590 1122246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:25:56.757242 1122246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1002 21:25:56.770098 1122246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:25:56.782753 1122246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
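
The kubeadm.yaml.new written above carries the four "---"-separated documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check that such a multi-document stream parses, sketched with gopkg.in/yaml.v3; the local file path is illustrative and this is not part of minikube:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative local copy of the stream
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // iterates over "---"-separated documents
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err) // a document failed to parse
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
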
	I1002 21:25:56.795644 1122246 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:25:56.799054 1122246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
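
The bash one-liner above is a replace-then-append upsert: filter out any stale control-plane.minikube.internal line, append the current mapping, and swap the file in via a temp copy. A hypothetical Go equivalent of the same pattern (function name and the /tmp path are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites path so exactly one line maps host to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop stale entries
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic on the same filesystem
}

func main() {
	if err := upsertHost("/tmp/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
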
	I1002 21:25:56.808363 1122246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:25:56.914573 1122246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:25:56.929616 1122246 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213 for IP: 192.168.76.2
	I1002 21:25:56.929677 1122246 certs.go:195] generating shared ca certs ...
	I1002 21:25:56.929709 1122246 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:56.929864 1122246 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:25:56.929930 1122246 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:25:56.929953 1122246 certs.go:257] generating profile certs ...
	I1002 21:25:56.930174 1122246 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.key
	I1002 21:25:56.930283 1122246 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/apiserver.key.07a46334
	I1002 21:25:56.930355 1122246 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/proxy-client.key
	I1002 21:25:56.930485 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:25:56.930541 1122246 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:25:56.930584 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:25:56.930630 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:25:56.930682 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:25:56.930725 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:25:56.930794 1122246 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:25:56.931382 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:25:56.950570 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:25:56.969093 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:25:56.987780 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:25:57.008910 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 21:25:57.028436 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:25:57.047010 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:25:57.069387 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:25:57.095721 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:25:57.120390 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:25:57.143300 1122246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:25:57.162433 1122246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:25:57.175066 1122246 ssh_runner.go:195] Run: openssl version
	I1002 21:25:57.181043 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:25:57.190020 1122246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:25:57.194781 1122246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:25:57.194858 1122246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:25:57.236720 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:25:57.244876 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:25:57.253363 1122246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:25:57.257047 1122246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:25:57.257108 1122246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:25:57.298872 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:25:57.306654 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:25:57.314771 1122246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:25:57.318580 1122246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:25:57.318644 1122246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:25:57.360672 1122246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:25:57.368537 1122246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:25:57.372277 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:25:57.413000 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:25:57.454361 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:25:57.495452 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:25:57.543863 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:25:57.594163 1122246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
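
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A minimal Go equivalent using crypto/x509 (the cert path below is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, matching openssl's -checkend semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon) // openssl exits non-zero in this case
}
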
	I1002 21:25:57.664027 1122246 kubeadm.go:400] StartCluster: {Name:test-preload-731213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-731213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:25:57.664154 1122246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:25:57.664260 1122246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:25:57.756496 1122246 cri.go:89] found id: "db921aa74c72ed857d6d93672188d934ac1dd700cff01d3ff816b86737f499e6"
	I1002 21:25:57.756522 1122246 cri.go:89] found id: "b86b5cbe0416fa59f31776431f594cc7379b987139f3e4f95f67a028888d2ce2"
	I1002 21:25:57.756528 1122246 cri.go:89] found id: "7547cdcf2123708b60377f5fcaa59e29beba6832d2d16a16a3a45be99c942ccd"
	I1002 21:25:57.756533 1122246 cri.go:89] found id: ""
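
The three container IDs above come from the `sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation on the preceding Run: line. A hedged os/exec sketch of the same listing (assumes crictl is on PATH and sudo is passwordless, as on the CI node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists all container IDs whose pods are labeled
// with the kube-system namespace; --quiet prints one ID per line.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		fmt.Println(id)
	}
}
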
	I1002 21:25:57.756604 1122246 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:25:57.782667 1122246 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:25:57Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:25:57.782752 1122246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:25:57.796835 1122246 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:25:57.796870 1122246 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:25:57.796939 1122246 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:25:57.808145 1122246 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:25:57.808605 1122246 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-731213" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:25:57.808735 1122246 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-731213" cluster setting kubeconfig missing "test-preload-731213" context setting]
	I1002 21:25:57.809051 1122246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:57.809682 1122246 kapi.go:59] client config for test-preload-731213: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:25:57.810399 1122246 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:25:57.810430 1122246 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:25:57.810436 1122246 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:25:57.810441 1122246 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:25:57.810445 1122246 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:25:57.810795 1122246 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:25:57.822706 1122246 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:25:57.822753 1122246 kubeadm.go:601] duration metric: took 25.863417ms to restartPrimaryControlPlane
	I1002 21:25:57.822766 1122246 kubeadm.go:402] duration metric: took 158.747072ms to StartCluster
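
The restart path above gets its idempotence from a compare step: the freshly rendered config was written to kubeadm.yaml.new, `diff -u` against the live kubeadm.yaml, and reconfiguration is skipped when they match ("does not require reconfiguration"). A minimal Go sketch of that comparison; the paths are from the log but the function is illustrative, not minikube's kubeadm.go:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig reports whether the rendered config differs from the
// one the running control plane was started with.
func needsReconfig(current, rendered string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // no current config: reconfigure
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("compare failed:", err)
		return
	}
	fmt.Println("reconfiguration required:", changed)
}
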
	I1002 21:25:57.822794 1122246 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:57.822873 1122246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:25:57.823599 1122246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:25:57.823809 1122246 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:25:57.824132 1122246 config.go:182] Loaded profile config "test-preload-731213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:25:57.824189 1122246 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:25:57.824323 1122246 addons.go:69] Setting storage-provisioner=true in profile "test-preload-731213"
	I1002 21:25:57.824343 1122246 addons.go:238] Setting addon storage-provisioner=true in "test-preload-731213"
	W1002 21:25:57.824349 1122246 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:25:57.824388 1122246 host.go:66] Checking if "test-preload-731213" exists ...
	I1002 21:25:57.824940 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:57.825195 1122246 addons.go:69] Setting default-storageclass=true in profile "test-preload-731213"
	I1002 21:25:57.825222 1122246 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-731213"
	I1002 21:25:57.825521 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:57.826380 1122246 out.go:179] * Verifying Kubernetes components...
	I1002 21:25:57.830289 1122246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:25:57.866567 1122246 kapi.go:59] client config for test-preload-731213: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/test-preload-731213/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:25:57.866962 1122246 addons.go:238] Setting addon default-storageclass=true in "test-preload-731213"
	W1002 21:25:57.866975 1122246 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:25:57.867001 1122246 host.go:66] Checking if "test-preload-731213" exists ...
	I1002 21:25:57.867573 1122246 cli_runner.go:164] Run: docker container inspect test-preload-731213 --format={{.State.Status}}
	I1002 21:25:57.879772 1122246 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:25:57.880986 1122246 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:25:57.881018 1122246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:25:57.881086 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:57.906350 1122246 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:25:57.906370 1122246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:25:57.906431 1122246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-731213
	I1002 21:25:57.930875 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:57.946252 1122246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34091 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/test-preload-731213/id_rsa Username:docker}
	I1002 21:25:58.123426 1122246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:25:58.135459 1122246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:25:58.195049 1122246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:26:02.241600 1122246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.118135137s)
	I1002 21:26:02.241665 1122246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.106181739s)
	I1002 21:26:02.242002 1122246 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.046927221s)
	I1002 21:26:02.242057 1122246 node_ready.go:35] waiting up to 6m0s for node "test-preload-731213" to be "Ready" ...
	I1002 21:26:02.270377 1122246 node_ready.go:49] node "test-preload-731213" is "Ready"
	I1002 21:26:02.270408 1122246 node_ready.go:38] duration metric: took 28.327879ms for node "test-preload-731213" to be "Ready" ...
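
The Ready check above reads the node's status conditions through the API server. A hedged client-go sketch of the same check; the kubeconfig path is illustrative, and this is not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-731213", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A node is "Ready" when the NodeReady condition has status True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}
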
	I1002 21:26:02.270420 1122246 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:26:02.270487 1122246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:26:02.285392 1122246 api_server.go:72] duration metric: took 4.461547803s to wait for apiserver process to appear ...
	I1002 21:26:02.285462 1122246 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:26:02.285495 1122246 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:26:02.294823 1122246 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:26:02.294851 1122246 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:26:02.297441 1122246 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:26:02.298545 1122246 addons.go:514] duration metric: took 4.474340992s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:26:02.785681 1122246 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:26:02.794935 1122246 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:26:02.795949 1122246 api_server.go:141] control plane version: v1.32.0
	I1002 21:26:02.795977 1122246 api_server.go:131] duration metric: took 510.494941ms to wait for apiserver health ...
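
The 500-then-200 sequence above is the usual /healthz polling pattern: the endpoint returns 500 until the post-start hooks (here rbac/bootstrap-roles) complete, then flips to 200 ok. A minimal polling sketch; InsecureSkipVerify below is an assumption made for brevity, standing in for loading /var/lib/minikube/certs/ca.crt into a cert pool:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip TLS verification instead of trusting the
		// cluster CA; fine for a sketch, not for production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // poll until healthy or timed out
	}
	fmt.Println("timed out waiting for /healthz")
}
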
	I1002 21:26:02.796006 1122246 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:26:02.800483 1122246 system_pods.go:59] 8 kube-system pods found
	I1002 21:26:02.800534 1122246 system_pods.go:61] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:02.800545 1122246 system_pods.go:61] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:02.800553 1122246 system_pods.go:61] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:26:02.800562 1122246 system_pods.go:61] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:02.800584 1122246 system_pods.go:61] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:02.800592 1122246 system_pods.go:61] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:26:02.800604 1122246 system_pods.go:61] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:02.800617 1122246 system_pods.go:61] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:26:02.800626 1122246 system_pods.go:74] duration metric: took 4.612821ms to wait for pod list to return data ...
	I1002 21:26:02.800639 1122246 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:26:02.804397 1122246 default_sa.go:45] found service account: "default"
	I1002 21:26:02.804423 1122246 default_sa.go:55] duration metric: took 3.769446ms for default service account to be created ...
	I1002 21:26:02.804444 1122246 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:26:02.902921 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:02.902955 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:02.902965 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:02.902996 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:26:02.903012 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:02.903019 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:02.903030 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:26:02.903037 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:02.903045 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:26:02.903072 1122246 retry.go:31] will retry after 251.824632ms: missing components: kube-controller-manager
	I1002 21:26:03.158534 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:03.158571 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:03.158581 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:03.158590 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:26:03.158598 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:03.158607 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:03.158623 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:26:03.158632 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:03.158639 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:26:03.158658 1122246 retry.go:31] will retry after 321.616788ms: missing components: kube-controller-manager
	I1002 21:26:03.484241 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:03.484279 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:03.484307 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:03.484321 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:03.484329 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:03.484340 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:03.484345 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:03.484351 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:03.484365 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:03.484388 1122246 retry.go:31] will retry after 415.139037ms: missing components: kube-controller-manager
	I1002 21:26:03.902674 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:03.902713 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:03.902724 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:03.902734 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:03.902741 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:03.902785 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:03.902801 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:03.902809 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:03.902813 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:03.902829 1122246 retry.go:31] will retry after 600.510316ms: missing components: kube-controller-manager
	I1002 21:26:04.506994 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:04.507032 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:04.507042 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:04.507048 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:04.507055 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:04.507068 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:04.507078 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:04.507084 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:04.507088 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:04.507103 1122246 retry.go:31] will retry after 713.078464ms: missing components: kube-controller-manager
	I1002 21:26:05.223866 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:05.223903 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:05.223913 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:05.223918 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:05.223925 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:05.223933 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:05.223938 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:05.223944 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:05.223955 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:05.223969 1122246 retry.go:31] will retry after 676.237831ms: missing components: kube-controller-manager
	I1002 21:26:05.903834 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:05.903873 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:05.903883 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:05.903922 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:05.903935 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:05.903942 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:05.903948 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:05.903959 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:05.903969 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:05.904010 1122246 retry.go:31] will retry after 1.006852447s: missing components: kube-controller-manager
	I1002 21:26:06.914315 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:06.914354 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:06.914370 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:06.914384 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:06.914391 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:06.914407 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:06.914424 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:06.914431 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:06.914435 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:06.914451 1122246 retry.go:31] will retry after 1.063662097s: missing components: kube-controller-manager
	I1002 21:26:07.981410 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:07.981448 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:07.981459 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:26:07.981465 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:07.981471 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:07.981477 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:07.981483 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:07.981493 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:07.981503 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:07.981517 1122246 retry.go:31] will retry after 1.610085091s: missing components: kube-controller-manager
	I1002 21:26:09.596456 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:09.596493 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:09.596501 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:09.596507 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:09.596514 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:09.596520 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:09.596525 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:09.596530 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:09.596534 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:09.596548 1122246 retry.go:31] will retry after 1.447961148s: missing components: kube-controller-manager
	I1002 21:26:11.047611 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:11.047646 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:11.047654 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:11.047659 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:11.047671 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:11.047678 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:11.047683 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:11.047689 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:11.047694 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:11.047712 1122246 retry.go:31] will retry after 2.479608296s: missing components: kube-controller-manager
	I1002 21:26:13.531000 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:13.531038 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:13.531099 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:13.531145 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:13.531151 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:26:13.531162 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:13.531167 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:13.531190 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:26:13.531200 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:13.531215 1122246 retry.go:31] will retry after 3.60172407s: missing components: kube-controller-manager
	I1002 21:26:17.137025 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:17.137060 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:17.137069 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:17.137075 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:17.137079 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:17.137086 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:17.137090 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:17.137095 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:17.137099 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:17.137113 1122246 retry.go:31] will retry after 4.032132552s: missing components: kube-controller-manager
	I1002 21:26:21.173378 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:21.173412 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:21.173420 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:21.173426 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:21.173430 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:21.173437 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:21.173441 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:21.173446 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:21.173450 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:21.173469 1122246 retry.go:31] will retry after 3.818641123s: missing components: kube-controller-manager
	I1002 21:26:24.997370 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:24.997407 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:24.997415 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:24.997421 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:24.997432 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:24.997439 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:24.997447 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:24.997458 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:24.997463 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:24.997478 1122246 retry.go:31] will retry after 5.221439233s: missing components: kube-controller-manager
	I1002 21:26:30.222874 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:30.222916 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:30.222928 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:30.222934 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:30.222938 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:30.222947 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:30.222951 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:30.222958 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:30.222967 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:30.222982 1122246 retry.go:31] will retry after 5.43892632s: missing components: kube-controller-manager
	I1002 21:26:35.665150 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:35.665188 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:26:35.665196 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:35.665202 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:35.665207 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:35.665214 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:35.665218 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:35.665223 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:35.665227 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:35.665241 1122246 retry.go:31] will retry after 8.92061201s: missing components: kube-controller-manager
	I1002 21:26:44.590807 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:44.590842 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:26:44.590850 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:44.590855 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:44.590859 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:44.590866 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:44.590871 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:44.590876 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:44.590880 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:44.590895 1122246 retry.go:31] will retry after 10.39177209s: missing components: kube-controller-manager
	I1002 21:26:54.986114 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:26:54.986145 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:26:54.986152 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:26:54.986156 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:26:54.986160 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:26:54.986168 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:26:54.986172 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:26:54.986178 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:26:54.986183 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:26:54.986197 1122246 retry.go:31] will retry after 11.58542642s: missing components: kube-controller-manager
	I1002 21:27:06.577626 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:27:06.577656 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:27:06.577664 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:27:06.577668 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:27:06.577673 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:27:06.577680 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:27:06.577684 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:27:06.577690 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:27:06.577694 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:27:06.577707 1122246 retry.go:31] will retry after 16.95306995s: missing components: kube-controller-manager
	I1002 21:27:23.534179 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:27:23.534210 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:27:23.534217 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:27:23.534221 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:27:23.534225 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:27:23.534233 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:27:23.534239 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:27:23.534245 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:27:23.534249 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:27:23.534263 1122246 retry.go:31] will retry after 26.268394846s: missing components: kube-controller-manager
	I1002 21:27:49.807333 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:27:49.807366 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:27:49.807374 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:27:49.807379 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:27:49.807384 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:27:49.807391 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:27:49.807396 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:27:49.807401 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:27:49.807405 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:27:49.807419 1122246 retry.go:31] will retry after 29.932549952s: missing components: kube-controller-manager
	I1002 21:28:19.744977 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:28:19.745007 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:28:19.745014 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:28:19.745018 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:28:19.745023 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:28:19.745029 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:28:19.745034 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:28:19.745039 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:28:19.745043 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:28:19.745057 1122246 retry.go:31] will retry after 35.517252142s: missing components: kube-controller-manager
	I1002 21:28:55.266570 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:28:55.266607 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:28:55.266615 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:28:55.266620 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:28:55.266625 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:28:55.266633 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:28:55.266637 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:28:55.266644 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:28:55.266648 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:28:55.266662 1122246 retry.go:31] will retry after 39.008898996s: missing components: kube-controller-manager
	I1002 21:29:34.278965 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:29:34.279000 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:29:34.279008 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:29:34.279013 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:29:34.279017 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:29:34.279024 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:29:34.279029 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:29:34.279034 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:29:34.279038 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:29:34.279053 1122246 retry.go:31] will retry after 57.102428017s: missing components: kube-controller-manager
	I1002 21:30:31.384608 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:30:31.384644 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:30:31.384652 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:30:31.384656 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:30:31.384661 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:30:31.384669 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:30:31.384673 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:30:31.384679 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:30:31.384685 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:30:31.384707 1122246 retry.go:31] will retry after 1m8.0959651s: missing components: kube-controller-manager
	I1002 21:31:39.484218 1122246 system_pods.go:86] 8 kube-system pods found
	I1002 21:31:39.484247 1122246 system_pods.go:89] "coredns-668d6bf9bc-86745" [3001f851-5177-4557-8dda-448785821d8a] Running
	I1002 21:31:39.484254 1122246 system_pods.go:89] "etcd-test-preload-731213" [ef5ae891-bb63-42a6-b6a9-3065cf2338d9] Running
	I1002 21:31:39.484258 1122246 system_pods.go:89] "kindnet-sgbmf" [d2226ccb-38f5-4a07-a914-723b65c45e19] Running
	I1002 21:31:39.484263 1122246 system_pods.go:89] "kube-apiserver-test-preload-731213" [def10935-52a6-4a7f-98e0-51ebda31a957] Running
	I1002 21:31:39.484271 1122246 system_pods.go:89] "kube-controller-manager-test-preload-731213" [0ac6fe01-6d70-483e-8db8-f5f675cac3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:31:39.484276 1122246 system_pods.go:89] "kube-proxy-8xb6q" [4358d1d6-67cb-48dd-adc4-14a1c24959cd] Running
	I1002 21:31:39.484280 1122246 system_pods.go:89] "kube-scheduler-test-preload-731213" [0cdd5ed3-1621-4fd4-8e2a-21de8322f2f1] Running
	I1002 21:31:39.484284 1122246 system_pods.go:89] "storage-provisioner" [2600466e-6bfb-4586-a689-8e852e74b923] Running
	I1002 21:31:39.487560 1122246 out.go:203] 
	W1002 21:31:39.490908 1122246 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-controller-manager
	W1002 21:31:39.490933 1122246 out.go:285] * 
	W1002 21:31:39.493070 1122246 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:31:39.496153 1122246 out.go:203] 
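
The retry.go:31 lines above show minikube polling the kube-system pods with a jittered, growing backoff (1.61s, 1.45s, 2.48s, ... up to roughly 1m8s) until its 6m0s node-wait deadline expires. As a rough illustration only, here is a minimal Go sketch of that pattern; the initial interval, growth factor, and cap are assumptions for the sketch, not minikube's actual constants.

// backoff_sketch.go - a minimal sketch (not minikube's implementation) of a
// capped, jittered exponential-backoff poll like the retry.go:31 lines above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it succeeds or the deadline passes,
// sleeping a jittered, exponentially growing interval between attempts.
func retryWithBackoff(deadline time.Duration, check func() error) error {
	start := time.Now()
	interval := time.Second // assumed initial interval
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Jitter is why the printed intervals are irregular rather than a clean doubling.
		sleep := interval + time.Duration(rand.Int63n(int64(interval)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		interval *= 2
		if interval > time.Minute {
			interval = time.Minute // assumed cap; the log tops out near 1m8s
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(6*time.Minute, func() error {
		attempts++
		if attempts < 5 {
			return fmt.Errorf("missing components: kube-controller-manager")
		}
		return nil
	})
	fmt.Println("result:", err)
}
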
	
	
	==> CRI-O <==
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.910452233Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.913877375Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.913912271Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.913935778Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.917257055Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.917291926Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.91731549Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.920473185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.920506432Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.920527961Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.923543982Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.923577598Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.923603927Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.926751974Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:26:42 test-preload-731213 crio[634]: time="2025-10-02T21:26:42.926786418Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:26:54 test-preload-731213 crio[634]: time="2025-10-02T21:26:54.409669739Z" level=info msg="createCtr: deleting container a07112a00705eeac052bcce1df9a7f8112c7ee664af8e0f141300ac7478e0754 from storage" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:26:54 test-preload-731213 crio[634]: time="2025-10-02T21:26:54.409957141Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/627ba4e2bc418227f7e143fbafec6740462567ea4294688c2a573a84ac96db6e/merged\": directory not empty" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:27:23 test-preload-731213 crio[634]: time="2025-10-02T21:27:23.243192194Z" level=info msg="createCtr: deleting container a07112a00705eeac052bcce1df9a7f8112c7ee664af8e0f141300ac7478e0754 from storage" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:27:23 test-preload-731213 crio[634]: time="2025-10-02T21:27:23.243511349Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/627ba4e2bc418227f7e143fbafec6740462567ea4294688c2a573a84ac96db6e/merged\": directory not empty" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:06 test-preload-731213 crio[634]: time="2025-10-02T21:28:06.493004911Z" level=info msg="createCtr: deleting container a07112a00705eeac052bcce1df9a7f8112c7ee664af8e0f141300ac7478e0754 from storage" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:06 test-preload-731213 crio[634]: time="2025-10-02T21:28:06.493360897Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/627ba4e2bc418227f7e143fbafec6740462567ea4294688c2a573a84ac96db6e/merged\": directory not empty" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:29:11 test-preload-731213 crio[634]: time="2025-10-02T21:29:11.367690914Z" level=info msg="createCtr: deleting container a07112a00705eeac052bcce1df9a7f8112c7ee664af8e0f141300ac7478e0754 from storage" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:29:11 test-preload-731213 crio[634]: time="2025-10-02T21:29:11.368009774Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/627ba4e2bc418227f7e143fbafec6740462567ea4294688c2a573a84ac96db6e/merged\": directory not empty" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:30:48 test-preload-731213 crio[634]: time="2025-10-02T21:30:48.677921575Z" level=info msg="createCtr: deleting container a07112a00705eeac052bcce1df9a7f8112c7ee664af8e0f141300ac7478e0754 from storage" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:30:48 test-preload-731213 crio[634]: time="2025-10-02T21:30:48.678250453Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/627ba4e2bc418227f7e143fbafec6740462567ea4294688c2a573a84ac96db6e/merged\": directory not empty" id=86613c61-2539-404e-b391-80fdfa01a0e6 name=/runtime.v1.RuntimeService/CreateContainer
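
These createCtr failures are the proximate symptom behind kube-controller-manager staying Pending above: CRI-O cannot replace the layer's "merged" mount point while it is still mounted or non-empty, so container creation keeps retrying. The following is a minimal sketch of that precondition, assuming Linux and a hypothetical path; it is not CRI-O's actual cleanup code.

// mountpoint_cleanup_sketch.go - sketch of why replacing a live overlay
// mount point fails with "directory not empty" (assumed Linux semantics).
package main

import (
	"fmt"
	"os"
	"syscall"
)

// replaceMountPoint lazily unmounts dir and then removes it, roughly the
// precondition needed before the directory can be recreated.
func replaceMountPoint(dir string) error {
	// MNT_DETACH is a lazy unmount; EINVAL just means dir was not a mount point.
	if err := syscall.Unmount(dir, syscall.MNT_DETACH); err != nil && err != syscall.EINVAL {
		return fmt.Errorf("unmount %s: %w", dir, err)
	}
	// os.Remove fails with ENOTEMPTY if files remain, the error class in the log.
	if err := os.Remove(dir); err != nil {
		return fmt.Errorf("remove %s: %w", dir, err)
	}
	return nil
}

func main() {
	// Hypothetical path modeled on the log line above.
	dir := "/var/lib/containers/storage/overlay/<layer-id>/merged"
	if err := replaceMountPoint(dir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
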
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                  ATTEMPT             POD ID              POD                                  NAMESPACE
	3b45cc95bf468       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51   5 minutes ago       Running             storage-provisioner   2                   acc0e3a44dd04       storage-provisioner                  kube-system
	ad1d877d32e73       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   5 minutes ago       Running             coredns               1                   6ff185b23c570       coredns-668d6bf9bc-86745             kube-system
	ec320400884bd       2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67   5 minutes ago       Running             kube-proxy            1                   afe4535129970       kube-proxy-8xb6q                     kube-system
	b09970e73a80e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni           1                   8c9c6568ea1dc       kindnet-sgbmf                        kube-system
	432e049039457       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51   5 minutes ago       Exited              storage-provisioner   1                   acc0e3a44dd04       storage-provisioner                  kube-system
	db921aa74c72e       7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82   5 minutes ago       Running             etcd                  1                   f18988a016d92       etcd-test-preload-731213             kube-system
	b86b5cbe0416f       2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc   5 minutes ago       Running             kube-apiserver        1                   cb70cbde59996       kube-apiserver-test-preload-731213   kube-system
	7547cdcf21237       c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d   5 minutes ago       Running             kube-scheduler        1                   807e68112f41c       kube-scheduler-test-preload-731213   kube-system
	
	
	==> coredns [ad1d877d32e73435394fbeefc597d76af139e59192a1ce1f3740ac4732e7629c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46692 - 55943 "HINFO IN 193875639871064597.8053494436857561924. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020093194s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[222064546]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (02-Oct-2025 21:26:02.603) (total time: 30001ms):
	Trace[222064546]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:26:32.604)
	Trace[222064546]: [30.001357208s] [30.001357208s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[75002927]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (02-Oct-2025 21:26:02.604) (total time: 30001ms):
	Trace[75002927]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:26:32.605)
	Trace[75002927]: [30.001087455s] [30.001087455s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[199593318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (02-Oct-2025 21:26:02.604) (total time: 30001ms):
	Trace[199593318]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (21:26:32.605)
	Trace[199593318]: [30.001070463s] [30.001070463s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
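
The traces above are client-go's reflector performing its initial List against the in-cluster kubernetes Service VIP (10.96.0.1:443) and timing out after ~30s. A minimal sketch of the equivalent call follows, assuming an in-cluster service account; the 30s timeout mirrors the trace durations and is an assumption of the sketch.

// reflector_list_sketch.go - sketch of the initial List a client-go reflector
// issues (limit=500, resourceVersion=0), as seen timing out in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points at the kubernetes Service VIP (10.96.0.1:443 here).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// When the VIP is unreachable, this fails with exactly the kind of
	// "dial tcp 10.96.0.1:443: i/o timeout" error the traces record.
	nss, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
	if err != nil {
		fmt.Println("list namespaces:", err)
		return
	}
	fmt.Println("namespaces:", len(nss.Items))
}
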
	
	
	==> describe nodes <==
	Name:               test-preload-731213
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=test-preload-731213
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=test-preload-731213
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_24_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:24:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-731213
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:31:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:30:17 +0000   Thu, 02 Oct 2025 21:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:30:17 +0000   Thu, 02 Oct 2025 21:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:30:17 +0000   Thu, 02 Oct 2025 21:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:30:17 +0000   Thu, 02 Oct 2025 21:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    test-preload-731213
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 e30b2def82914818b9fa337a0a4afc17
	  System UUID:                8c146bed-8b3d-43f6-ba31-303b1066ca29
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-86745                       100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m37s
	  kube-system                 etcd-test-preload-731213                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m42s
	  kube-system                 kindnet-sgbmf                                  100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m38s
	  kube-system                 kube-apiserver-test-preload-731213             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-test-preload-731213    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-proxy-8xb6q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-scheduler-test-preload-731213             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m36s                  kube-proxy       
	  Normal   Starting                 5m37s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m42s                  kubelet          Node test-preload-731213 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m42s                  kubelet          Node test-preload-731213 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m42s                  kubelet          Node test-preload-731213 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 6m42s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m39s                  node-controller  Node test-preload-731213 event: Registered Node test-preload-731213 in Controller
	  Normal   NodeReady                6m23s                  kubelet          Node test-preload-731213 status is now: NodeReady
	  Normal   Starting                 5m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node test-preload-731213 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node test-preload-731213 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m43s (x8 over 5m43s)  kubelet          Node test-preload-731213 status is now: NodeHasSufficientPID
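
For reference, the "Allocated resources" percentages in the describe output above are summed pod requests over node allocatable, truncated to whole percent (850m CPU of 2 cores). A quick check, with the CPU requests taken from the pod table above:

// resource_percent_sketch.go - reproduces the "cpu 850m (42%)" line above.
package main

import "fmt"

func main() {
	// coredns + etcd + kindnet + kube-apiserver + kube-controller-manager + kube-scheduler
	requestsMilli := 100 + 100 + 100 + 250 + 200 + 100 // millicores; kube-proxy and storage-provisioner request 0
	allocatableMilli := 2 * 1000                       // 2 CPUs
	fmt.Printf("cpu %dm (%d%%)\n", requestsMilli, requestsMilli*100/allocatableMilli)
}
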
	
	
	==> dmesg <==
	[Oct 2 21:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.260520] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:08] overlayfs: idmapped layers are currently not supported
	[  +3.176407] overlayfs: idmapped layers are currently not supported
	[ +43.828152] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:09] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [db921aa74c72ed857d6d93672188d934ac1dd700cff01d3ff816b86737f499e6] <==
	{"level":"info","ts":"2025-10-02T21:25:58.070893Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:25:58.069983Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-02T21:25:58.070291Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:25:58.070987Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:25:58.071025Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:25:58.070455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T21:25:58.071750Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T21:25:58.071872Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:25:58.071930Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:25:58.969066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T21:25:58.969212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T21:25:58.969287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T21:25:58.969331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T21:25:58.969373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T21:25:58.969406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-02T21:25:58.969437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T21:25:58.970216Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:test-preload-731213 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T21:25:58.970454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:25:58.970789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:25:58.970959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T21:25:58.971000Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T21:25:58.974570Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T21:25:58.975414Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T21:25:58.975908Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T21:25:58.976672Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:31:40 up  6:13,  0 user,  load average: 0.05, 0.62, 1.35
	Linux test-preload-731213 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b09970e73a80ee92646f509dcec7b139144b91d764564252d76f1ebcee82c9bd] <==
	I1002 21:29:32.918787       1 main.go:301] handling current node
	I1002 21:29:42.914192       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:29:42.914225       1 main.go:301] handling current node
	I1002 21:29:52.910386       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:29:52.910416       1 main.go:301] handling current node
	I1002 21:30:02.918133       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:30:02.918295       1 main.go:301] handling current node
	I1002 21:30:12.910127       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:30:12.910162       1 main.go:301] handling current node
	I1002 21:30:22.914505       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:30:22.914540       1 main.go:301] handling current node
	I1002 21:30:32.913890       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:30:32.913999       1 main.go:301] handling current node
	I1002 21:30:42.914012       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:30:42.914141       1 main.go:301] handling current node
	I1002 21:30:52.917927       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:30:52.917963       1 main.go:301] handling current node
	I1002 21:31:02.914123       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:31:02.914220       1 main.go:301] handling current node
	I1002 21:31:12.910392       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:31:12.910425       1 main.go:301] handling current node
	I1002 21:31:22.910398       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:31:22.910431       1 main.go:301] handling current node
	I1002 21:31:32.910400       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:31:32.910434       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b86b5cbe0416fa59f31776431f594cc7379b987139f3e4f95f67a028888d2ce2] <==
	I1002 21:26:01.196119       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1002 21:26:01.196158       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1002 21:26:01.196165       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 21:26:01.196178       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 21:26:01.474504       1 shared_informer.go:320] Caches are synced for configmaps
	I1002 21:26:01.497730       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1002 21:26:01.497791       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:26:01.498301       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:26:01.498323       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:26:01.501956       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:26:01.502080       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1002 21:26:01.502160       1 aggregator.go:171] initial CRD sync complete...
	I1002 21:26:01.502167       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 21:26:01.502173       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:26:01.502177       1 cache.go:39] Caches are synced for autoregister controller
	E1002 21:26:01.512381       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:26:01.551010       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1002 21:26:01.560018       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1002 21:26:01.560127       1 policy_source.go:240] refreshing policies
	I1002 21:26:01.566118       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:26:01.574142       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:26:01.581742       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:26:02.188103       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:26:02.277340       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1002 21:26:50.715446       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-proxy [ec320400884bd0732b5d45028c7b6ecbef10677c5a810c1cf4af236effe1f567] <==
	I1002 21:26:02.649894       1 server_linux.go:66] "Using iptables proxy"
	I1002 21:26:02.764362       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E1002 21:26:02.764933       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:26:02.824379       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:26:02.824438       1 server_linux.go:170] "Using iptables Proxier"
	I1002 21:26:02.826592       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:26:02.826884       1 server.go:497] "Version info" version="v1.32.0"
	I1002 21:26:02.826910       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:26:02.828511       1 config.go:199] "Starting service config controller"
	I1002 21:26:02.828604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 21:26:02.828666       1 config.go:105] "Starting endpoint slice config controller"
	I1002 21:26:02.828703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 21:26:02.829198       1 config.go:329] "Starting node config controller"
	I1002 21:26:02.829265       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 21:26:02.929753       1 shared_informer.go:320] Caches are synced for service config
	I1002 21:26:02.929770       1 shared_informer.go:320] Caches are synced for node config
	I1002 21:26:02.929793       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7547cdcf2123708b60377f5fcaa59e29beba6832d2d16a16a3a45be99c942ccd] <==
	I1002 21:25:58.738525       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:26:01.419605       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:26:01.419770       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:26:01.419814       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:26:01.419917       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:26:01.511162       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1002 21:26:01.511204       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:26:01.513775       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:26:01.513804       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 21:26:01.515086       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 21:26:01.515379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:26:01.615147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: I1002 21:26:01.589837     760 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: I1002 21:26:01.589869     760 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: I1002 21:26:01.590742     760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: E1002 21:26:01.614952     760 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-731213\" already exists" pod="kube-system/kube-scheduler-test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: I1002 21:26:01.615029     760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: E1002 21:26:01.633619     760 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-731213\" already exists" pod="kube-system/etcd-test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: I1002 21:26:01.633669     760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: E1002 21:26:01.643091     760 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-731213\" already exists" pod="kube-system/kube-apiserver-test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: I1002 21:26:01.643190     760 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-731213"
	Oct 02 21:26:01 test-preload-731213 kubelet[760]: E1002 21:26:01.654745     760 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-731213\" already exists" pod="kube-system/kube-controller-manager-test-preload-731213"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.094178     760 apiserver.go:52] "Watching apiserver"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.174410     760 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.268039     760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2226ccb-38f5-4a07-a914-723b65c45e19-lib-modules\") pod \"kindnet-sgbmf\" (UID: \"d2226ccb-38f5-4a07-a914-723b65c45e19\") " pod="kube-system/kindnet-sgbmf"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.268791     760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4358d1d6-67cb-48dd-adc4-14a1c24959cd-lib-modules\") pod \"kube-proxy-8xb6q\" (UID: \"4358d1d6-67cb-48dd-adc4-14a1c24959cd\") " pod="kube-system/kube-proxy-8xb6q"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.268963     760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4358d1d6-67cb-48dd-adc4-14a1c24959cd-xtables-lock\") pod \"kube-proxy-8xb6q\" (UID: \"4358d1d6-67cb-48dd-adc4-14a1c24959cd\") " pod="kube-system/kube-proxy-8xb6q"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.269077     760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2226ccb-38f5-4a07-a914-723b65c45e19-xtables-lock\") pod \"kindnet-sgbmf\" (UID: \"d2226ccb-38f5-4a07-a914-723b65c45e19\") " pod="kube-system/kindnet-sgbmf"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.269214     760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d2226ccb-38f5-4a07-a914-723b65c45e19-cni-cfg\") pod \"kindnet-sgbmf\" (UID: \"d2226ccb-38f5-4a07-a914-723b65c45e19\") " pod="kube-system/kindnet-sgbmf"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.270939     760 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2600466e-6bfb-4586-a689-8e852e74b923-tmp\") pod \"storage-provisioner\" (UID: \"2600466e-6bfb-4586-a689-8e852e74b923\") " pod="kube-system/storage-provisioner"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: I1002 21:26:02.303997     760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: W1002 21:26:02.418633     760 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/crio-acc0e3a44dd047749bea7928bc06f0aa6f397a6d194b6f187cdc23d1d416c51d WatchSource:0}: Error finding container acc0e3a44dd047749bea7928bc06f0aa6f397a6d194b6f187cdc23d1d416c51d: Status 404 returned error can't find the container with id acc0e3a44dd047749bea7928bc06f0aa6f397a6d194b6f187cdc23d1d416c51d
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: W1002 21:26:02.436247     760 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/crio-8c9c6568ea1dc4314893732432aa5e3be369cd07da43538e176d17dd477bb624 WatchSource:0}: Error finding container 8c9c6568ea1dc4314893732432aa5e3be369cd07da43538e176d17dd477bb624: Status 404 returned error can't find the container with id 8c9c6568ea1dc4314893732432aa5e3be369cd07da43538e176d17dd477bb624
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: W1002 21:26:02.445876     760 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/crio-afe4535129970b72a3e7788db1deb2d291977e4fbb55cd8c0d37ef3611283b6b WatchSource:0}: Error finding container afe4535129970b72a3e7788db1deb2d291977e4fbb55cd8c0d37ef3611283b6b: Status 404 returned error can't find the container with id afe4535129970b72a3e7788db1deb2d291977e4fbb55cd8c0d37ef3611283b6b
	Oct 02 21:26:02 test-preload-731213 kubelet[760]: W1002 21:26:02.494580     760 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/983787584c2c64dcf52e38f72a67d4ce3c220f51d6562eabcd80f74af551ed79/crio-6ff185b23c5708d0d3c01dd0367fedcf0a7ba1814551179000c218e26458aaf2 WatchSource:0}: Error finding container 6ff185b23c5708d0d3c01dd0367fedcf0a7ba1814551179000c218e26458aaf2: Status 404 returned error can't find the container with id 6ff185b23c5708d0d3c01dd0367fedcf0a7ba1814551179000c218e26458aaf2
	Oct 02 21:26:11 test-preload-731213 kubelet[760]: I1002 21:26:11.841322     760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:26:33 test-preload-731213 kubelet[760]: I1002 21:26:33.266284     760 scope.go:117] "RemoveContainer" containerID="432e0490394571b1ff0ca3776849d14430691de729e898da7943170904da8bf4"
	
	
	==> storage-provisioner [3b45cc95bf468ccf7d151508d667eaad77e34a51610cc1f180696ffc028a5102] <==
	I1002 21:26:33.307735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:26:33.320982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:26:33.321029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 21:26:50.717635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:26:50.717806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-731213_906e54bf-38cf-49f8-917b-e5d1c8f846b2!
	I1002 21:26:50.718658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"500cd277-7bad-434f-be38-d8df17ab1a2b", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-731213_906e54bf-38cf-49f8-917b-e5d1c8f846b2 became leader
	I1002 21:26:50.818652       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-731213_906e54bf-38cf-49f8-917b-e5d1c8f846b2!
	
	
	==> storage-provisioner [432e0490394571b1ff0ca3776849d14430691de729e898da7943170904da8bf4] <==
	I1002 21:26:02.584544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:26:32.586960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
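The two storage-provisioner blocks above capture a restart: the first instance (432e04...) exited fatally at 21:26:32 after failing to reach the in-cluster API server at 10.96.0.1:443 within 32s, kubelet removed the dead container at 21:26:33 (the RemoveContainer line in the kubelet log), and the replacement (3b45cc...) acquired the kube-system/k8s.io-minikube-hostpath lease at 21:26:50 and started its controller. Below is a minimal sketch of the kind of startup probe that produces the "error getting server version" fatal, assuming client-go's in-cluster config and discovery client; it is illustrative only, not the provisioner's actual source.

	// Illustrative sketch only; assumes client-go. Not the provisioner's source.
	package main

	import (
		"log"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Inside a pod this resolves to the kubernetes Service VIP,
		// https://10.96.0.1:443 in this cluster.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			log.Fatalf("error creating discovery client: %v", err)
		}
		// Issues GET /version against the API server; a dial timeout here
		// is fatal, which is why kubelet later restarts the container.
		v, err := dc.ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("server version: %s", v.GitVersion)
	}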
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p test-preload-731213 -n test-preload-731213
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-731213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: kube-controller-manager-test-preload-731213
helpers_test.go:282: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context test-preload-731213 describe pod kube-controller-manager-test-preload-731213
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context test-preload-731213 describe pod kube-controller-manager-test-preload-731213: exit status 1 (86.743995ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-controller-manager-test-preload-731213" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context test-preload-731213 describe pod kube-controller-manager-test-preload-731213: exit status 1
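The post-mortem sequence above is racy by design: helpers_test first lists pods whose status.phase is not Running, then describes each one; here kube-controller-manager-test-preload-731213 disappeared between the two calls, so the describe returned NotFound. A hedged sketch of the listing step, mirroring the kubectl invocation in the log (the context name is taken from the output; the helper itself is illustrative):

	// Illustrative helper mirroring the logged kubectl invocation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// nonRunningPods lists pod names across all namespaces whose
	// status.phase is anything other than Running.
	func nonRunningPods(kubectlContext string) (string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubectlContext,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		return string(out), err
	}

	func main() {
		pods, err := nonRunningPods("test-preload-731213")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("non-running pods:", pods)
	}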
helpers_test.go:175: Cleaning up "test-preload-731213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-731213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-731213: (2.303930082s)
--- FAIL: TestPreload (443.37s)

                                                
                                    
x
+
TestPause/serial/Pause (7.12s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-342805 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-342805 --alsologtostderr -v=5: exit status 80 (2.219788555s)

                                                
                                                
-- stdout --
	* Pausing node pause-342805 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:39:20.160246 1159834 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:39:20.161160 1159834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:20.161180 1159834 out.go:374] Setting ErrFile to fd 2...
	I1002 21:39:20.161187 1159834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:20.161470 1159834 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:39:20.161768 1159834 out.go:368] Setting JSON to false
	I1002 21:39:20.161798 1159834 mustload.go:65] Loading cluster: pause-342805
	I1002 21:39:20.162295 1159834 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:20.162784 1159834 cli_runner.go:164] Run: docker container inspect pause-342805 --format={{.State.Status}}
	I1002 21:39:20.191804 1159834 host.go:66] Checking if "pause-342805" exists ...
	I1002 21:39:20.192114 1159834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:20.290443 1159834 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:39:20.278563839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:20.291107 1159834 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-342805 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 21:39:20.294194 1159834 out.go:179] * Pausing node pause-342805 ... 
	I1002 21:39:20.298197 1159834 host.go:66] Checking if "pause-342805" exists ...
	I1002 21:39:20.298545 1159834 ssh_runner.go:195] Run: systemctl --version
	I1002 21:39:20.298594 1159834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:20.330280 1159834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:20.465593 1159834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:39:20.500418 1159834 pause.go:51] kubelet running: true
	I1002 21:39:20.500485 1159834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:39:20.881347 1159834 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:39:20.881434 1159834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:39:21.026028 1159834 cri.go:89] found id: "49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb"
	I1002 21:39:21.026076 1159834 cri.go:89] found id: "fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a"
	I1002 21:39:21.026081 1159834 cri.go:89] found id: "b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b"
	I1002 21:39:21.026085 1159834 cri.go:89] found id: "710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588"
	I1002 21:39:21.026089 1159834 cri.go:89] found id: "a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70"
	I1002 21:39:21.026092 1159834 cri.go:89] found id: "f51f346240e74219bcbf28ffade50a4f610755e03459803502dccfa987f4c32b"
	I1002 21:39:21.026095 1159834 cri.go:89] found id: "2eebf60bbf5500fe23ec8cd8315dfbc1950838a47c8aece540057eda0ca29225"
	I1002 21:39:21.026097 1159834 cri.go:89] found id: "aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e"
	I1002 21:39:21.026100 1159834 cri.go:89] found id: "3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58"
	I1002 21:39:21.026106 1159834 cri.go:89] found id: "a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf"
	I1002 21:39:21.026110 1159834 cri.go:89] found id: "a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373"
	I1002 21:39:21.026113 1159834 cri.go:89] found id: "896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695"
	I1002 21:39:21.026116 1159834 cri.go:89] found id: "eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99"
	I1002 21:39:21.026119 1159834 cri.go:89] found id: "8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0"
	I1002 21:39:21.026122 1159834 cri.go:89] found id: ""
	I1002 21:39:21.026241 1159834 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:39:21.041743 1159834 retry.go:31] will retry after 317.866718ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:39:21Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:39:21.360148 1159834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:39:21.377177 1159834 pause.go:51] kubelet running: false
	I1002 21:39:21.377296 1159834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:39:21.620862 1159834 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:39:21.621018 1159834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:39:21.703406 1159834 cri.go:89] found id: "49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb"
	I1002 21:39:21.703478 1159834 cri.go:89] found id: "fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a"
	I1002 21:39:21.703496 1159834 cri.go:89] found id: "b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b"
	I1002 21:39:21.703516 1159834 cri.go:89] found id: "710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588"
	I1002 21:39:21.703536 1159834 cri.go:89] found id: "a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70"
	I1002 21:39:21.703569 1159834 cri.go:89] found id: "f51f346240e74219bcbf28ffade50a4f610755e03459803502dccfa987f4c32b"
	I1002 21:39:21.703585 1159834 cri.go:89] found id: "2eebf60bbf5500fe23ec8cd8315dfbc1950838a47c8aece540057eda0ca29225"
	I1002 21:39:21.703602 1159834 cri.go:89] found id: "aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e"
	I1002 21:39:21.703633 1159834 cri.go:89] found id: "3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58"
	I1002 21:39:21.703660 1159834 cri.go:89] found id: "a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf"
	I1002 21:39:21.703678 1159834 cri.go:89] found id: "a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373"
	I1002 21:39:21.703695 1159834 cri.go:89] found id: "896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695"
	I1002 21:39:21.703730 1159834 cri.go:89] found id: "eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99"
	I1002 21:39:21.703747 1159834 cri.go:89] found id: "8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0"
	I1002 21:39:21.703763 1159834 cri.go:89] found id: ""
	I1002 21:39:21.703844 1159834 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:39:21.715794 1159834 retry.go:31] will retry after 208.336883ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:39:21Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:39:21.925336 1159834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:39:21.939646 1159834 pause.go:51] kubelet running: false
	I1002 21:39:21.939714 1159834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:39:22.162724 1159834 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:39:22.162861 1159834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:39:22.252829 1159834 cri.go:89] found id: "49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb"
	I1002 21:39:22.252884 1159834 cri.go:89] found id: "fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a"
	I1002 21:39:22.252889 1159834 cri.go:89] found id: "b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b"
	I1002 21:39:22.252894 1159834 cri.go:89] found id: "710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588"
	I1002 21:39:22.252911 1159834 cri.go:89] found id: "a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70"
	I1002 21:39:22.252919 1159834 cri.go:89] found id: "f51f346240e74219bcbf28ffade50a4f610755e03459803502dccfa987f4c32b"
	I1002 21:39:22.252928 1159834 cri.go:89] found id: "2eebf60bbf5500fe23ec8cd8315dfbc1950838a47c8aece540057eda0ca29225"
	I1002 21:39:22.252945 1159834 cri.go:89] found id: "aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e"
	I1002 21:39:22.252952 1159834 cri.go:89] found id: "3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58"
	I1002 21:39:22.252971 1159834 cri.go:89] found id: "a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf"
	I1002 21:39:22.252975 1159834 cri.go:89] found id: "a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373"
	I1002 21:39:22.252978 1159834 cri.go:89] found id: "896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695"
	I1002 21:39:22.252981 1159834 cri.go:89] found id: "eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99"
	I1002 21:39:22.252987 1159834 cri.go:89] found id: "8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0"
	I1002 21:39:22.252992 1159834 cri.go:89] found id: ""
	I1002 21:39:22.253059 1159834 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:39:22.273103 1159834 out.go:203] 
	W1002 21:39:22.275978 1159834 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:39:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:39:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:39:22.276000 1159834 out.go:285] * 
	* 
	W1002 21:39:22.285758 1159834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:39:22.288838 1159834 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-342805 --alsologtostderr -v=5" : exit status 80
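The pause path visible in the stderr above disables kubelet, enumerates CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl, then asks runc for the running containers to pause; on this CRI-O node /run/runc does not exist, so "sudo runc list -f json" fails on every attempt and, once the backoff retries are exhausted, minikube exits with GUEST_PAUSE (status 80). A minimal sketch of the retry loop seen in the retry.go lines follows; durations and structure are illustrative, not minikube's actual implementation.

	// Illustrative retry loop; minikube's retry.go is more general.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func runcList() ([]byte, error) {
		// The exact command the log shows failing with
		// "open /run/runc: no such file or directory".
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		var lastErr error
		for attempt := 0; attempt < 3; attempt++ {
			out, err := runcList()
			if err == nil {
				fmt.Println(string(out))
				return
			}
			lastErr = err
			// The log shows jittered waits (317ms, then 208ms).
			wait := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		// Exhausted retries surface as GUEST_PAUSE / exit status 80 above.
		fmt.Println("giving up:", lastErr)
	}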
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-342805
helpers_test.go:243: (dbg) docker inspect pause-342805:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62",
	        "Created": "2025-10-02T21:37:45.475135327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1154611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:37:45.539913308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/hostname",
	        "HostsPath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/hosts",
	        "LogPath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62-json.log",
	        "Name": "/pause-342805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-342805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-342805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62",
	                "LowerDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-342805",
	                "Source": "/var/lib/docker/volumes/pause-342805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-342805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-342805",
	                "name.minikube.sigs.k8s.io": "pause-342805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "acc0608eff5b2cd2c1b69653ec74f2fd34f6f6a6cc12cb0d93789c581113fa25",
	            "SandboxKey": "/var/run/docker/netns/acc0608eff5b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34156"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34157"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34160"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34158"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34159"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-342805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:fb:4c:f8:a0:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "81494d96e03b293c61c1cdd8f7783b34a1a7a1b0c4b109a75e93b51a5ab06b80",
	                    "EndpointID": "52f413570282fbeb723b63b15fdc77cf0c9740028c46017e3692df79124d777f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-342805",
	                        "530b4c7e0490"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
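The PortBindings/Ports sections above also explain the earlier sshutil.go line: the pause flow resolves the container's SSH endpoint by templating NetworkSettings.Ports, which for this inspect output yields 127.0.0.1:34156 for 22/tcp. A hedged sketch reproducing that lookup (the Go template is copied from the cli_runner line in the log; the surrounding helper is illustrative):

	// Illustrative helper; the template string is copied from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container,
		).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := sshHostPort("pause-342805")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh port:", port) // prints 34156 for the inspect output above
	}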
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-342805 -n pause-342805
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-342805 -n pause-342805: exit status 2 (422.10428ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-342805 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-342805 logs -n 25: (1.633187665s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-222907 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:33 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p missing-upgrade-192196 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-192196    │ jenkins │ v1.32.0 │ 02 Oct 25 21:33 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ delete  │ -p NoKubernetes-222907                                                                                                                   │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p missing-upgrade-192196 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-192196    │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:35 UTC │
	│ ssh     │ -p NoKubernetes-222907 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │                     │
	│ stop    │ -p NoKubernetes-222907                                                                                                                   │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p NoKubernetes-222907 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:35 UTC │
	│ ssh     │ -p NoKubernetes-222907 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │                     │
	│ delete  │ -p NoKubernetes-222907                                                                                                                   │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ start   │ -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-840583 │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ delete  │ -p missing-upgrade-192196                                                                                                                │ missing-upgrade-192196    │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ start   │ -p stopped-upgrade-678661 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-678661    │ jenkins │ v1.32.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:36 UTC │
	│ stop    │ -p kubernetes-upgrade-840583                                                                                                             │ kubernetes-upgrade-840583 │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ start   │ -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-840583 │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │                     │
	│ stop    │ stopped-upgrade-678661 stop                                                                                                              │ stopped-upgrade-678661    │ jenkins │ v1.32.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:36 UTC │
	│ start   │ -p stopped-upgrade-678661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-678661    │ jenkins │ v1.37.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:36 UTC │
	│ delete  │ -p stopped-upgrade-678661                                                                                                                │ stopped-upgrade-678661    │ jenkins │ v1.37.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:36 UTC │
	│ start   │ -p running-upgrade-497263 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-497263    │ jenkins │ v1.32.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:37 UTC │
	│ start   │ -p running-upgrade-497263 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-497263    │ jenkins │ v1.37.0 │ 02 Oct 25 21:37 UTC │ 02 Oct 25 21:37 UTC │
	│ delete  │ -p running-upgrade-497263                                                                                                                │ running-upgrade-497263    │ jenkins │ v1.37.0 │ 02 Oct 25 21:37 UTC │ 02 Oct 25 21:37 UTC │
	│ start   │ -p pause-342805 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-342805              │ jenkins │ v1.37.0 │ 02 Oct 25 21:37 UTC │ 02 Oct 25 21:39 UTC │
	│ start   │ -p pause-342805 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-342805              │ jenkins │ v1.37.0 │ 02 Oct 25 21:39 UTC │ 02 Oct 25 21:39 UTC │
	│ pause   │ -p pause-342805 --alsologtostderr -v=5                                                                                                   │ pause-342805              │ jenkins │ v1.37.0 │ 02 Oct 25 21:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:39:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:39:01.620590 1158612 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:39:01.622726 1158612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:01.622783 1158612 out.go:374] Setting ErrFile to fd 2...
	I1002 21:39:01.622804 1158612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:01.623263 1158612 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:39:01.623897 1158612 out.go:368] Setting JSON to false
	I1002 21:39:01.625062 1158612 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22879,"bootTime":1759418263,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:39:01.629831 1158612 start.go:140] virtualization:  
	I1002 21:39:01.634483 1158612 out.go:179] * [pause-342805] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:39:01.638467 1158612 notify.go:221] Checking for updates...
	I1002 21:39:01.639874 1158612 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:39:01.643032 1158612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:39:01.646696 1158612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:39:01.649949 1158612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:39:01.652987 1158612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:39:01.656399 1158612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:39:01.659984 1158612 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:01.660890 1158612 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:39:01.689159 1158612 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:39:01.689295 1158612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:01.804453 1158612 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:39:01.794012015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:01.804566 1158612 docker.go:319] overlay module found
	I1002 21:39:01.807769 1158612 out.go:179] * Using the docker driver based on existing profile
	I1002 21:39:01.810645 1158612 start.go:306] selected driver: docker
	I1002 21:39:01.810667 1158612 start.go:936] validating driver "docker" against &{Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:01.810799 1158612 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:39:01.810898 1158612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:01.870092 1158612 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:39:01.861252268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:01.870508 1158612 cni.go:84] Creating CNI manager for ""
	I1002 21:39:01.870574 1158612 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:39:01.870622 1158612 start.go:350] cluster config:
	{Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:01.875493 1158612 out.go:179] * Starting "pause-342805" primary control-plane node in "pause-342805" cluster
	I1002 21:39:01.878163 1158612 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:39:01.881023 1158612 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:39:01.883673 1158612 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:01.883732 1158612 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:39:01.883748 1158612 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:39:01.883758 1158612 cache.go:59] Caching tarball of preloaded images
	I1002 21:39:01.883838 1158612 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:39:01.883890 1158612 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:39:01.884025 1158612 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/config.json ...
	I1002 21:39:01.906379 1158612 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:39:01.906404 1158612 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:39:01.906416 1158612 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:39:01.906437 1158612 start.go:361] acquireMachinesLock for pause-342805: {Name:mk9a324cf34d97a2bca3e9378b685e5bb3f5cda9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:39:01.906493 1158612 start.go:365] duration metric: took 34.838µs to acquireMachinesLock for "pause-342805"
	I1002 21:39:01.906523 1158612 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:39:01.906532 1158612 fix.go:55] fixHost starting: 
	I1002 21:39:01.906775 1158612 cli_runner.go:164] Run: docker container inspect pause-342805 --format={{.State.Status}}
	I1002 21:39:01.924170 1158612 fix.go:113] recreateIfNeeded on pause-342805: state=Running err=<nil>
	W1002 21:39:01.924200 1158612 fix.go:139] unexpected machine state, will restart: <nil>
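
The acquireMachinesLock step above serializes concurrent minikube operations against the same machine profile using a named lock with a 500ms retry delay and a 10m timeout (the Delay and Timeout fields logged at start.go:361). Below is a minimal sketch of that pattern using a plain exclusive lock file; the path and helper name are illustrative and this is not minikube's actual (juju/mutex-style) implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until the timeout expires.
// O_CREATE|O_EXCL makes creation atomic, so only one process can win.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay) // matches the 500ms Delay in the log
	}
}

func main() {
	release, err := acquireLock("/tmp/pause-342805.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to mutate machine state")
}
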
	I1002 21:39:01.231891 1144154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 21:39:01.231945 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:39:01.232022 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:39:01.267606 1144154 cri.go:89] found id: "ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:01.267631 1144154 cri.go:89] found id: "6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	I1002 21:39:01.267636 1144154 cri.go:89] found id: ""
	I1002 21:39:01.267644 1144154 logs.go:282] 2 containers: [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2]
	I1002 21:39:01.267708 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.272762 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.276865 1144154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:39:01.276948 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:39:01.310485 1144154 cri.go:89] found id: ""
	I1002 21:39:01.310514 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.310524 1144154 logs.go:284] No container was found matching "etcd"
	I1002 21:39:01.310530 1144154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:39:01.310594 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:39:01.350146 1144154 cri.go:89] found id: ""
	I1002 21:39:01.350171 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.350182 1144154 logs.go:284] No container was found matching "coredns"
	I1002 21:39:01.350188 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:39:01.350257 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:39:01.384024 1144154 cri.go:89] found id: "5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:01.384069 1144154 cri.go:89] found id: ""
	I1002 21:39:01.384093 1144154 logs.go:282] 1 containers: [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe]
	I1002 21:39:01.384149 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.389157 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:39:01.389319 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:39:01.422635 1144154 cri.go:89] found id: ""
	I1002 21:39:01.422663 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.422679 1144154 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:39:01.422689 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:39:01.422798 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:39:01.478749 1144154 cri.go:89] found id: "3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:01.478769 1144154 cri.go:89] found id: ""
	I1002 21:39:01.478777 1144154 logs.go:282] 1 containers: [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35]
	I1002 21:39:01.478894 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.483034 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:39:01.483115 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:39:01.518718 1144154 cri.go:89] found id: ""
	I1002 21:39:01.518740 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.518749 1144154 logs.go:284] No container was found matching "kindnet"
	I1002 21:39:01.518756 1144154 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 21:39:01.518813 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 21:39:01.571185 1144154 cri.go:89] found id: ""
	I1002 21:39:01.571209 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.571217 1144154 logs.go:284] No container was found matching "storage-provisioner"
	I1002 21:39:01.571230 1144154 logs.go:123] Gathering logs for kube-controller-manager [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35] ...
	I1002 21:39:01.571242 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:01.612118 1144154 logs.go:123] Gathering logs for container status ...
	I1002 21:39:01.612149 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:39:01.667849 1144154 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:39:01.667874 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
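
The interleaved 1144154 lines come from a concurrent test gathering diagnostics: for each control-plane component it asks crictl for matching container IDs (ps -a --quiet --name=<component>), then tails the logs of whatever it finds. A small sketch of that loop, assuming crictl is on PATH and sudo is available; the function name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>` calls in
// the log: it returns all container IDs (one per line) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(comp)
		if err != nil {
			fmt.Println(comp, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", comp, len(ids), ids) // same shape as logs.go:282
		for _, id := range ids {
			// tail the last 400 lines, as in "crictl logs --tail 400 <id>"
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(out))
		}
	}
}
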
	I1002 21:39:01.927443 1158612 out.go:252] * Updating the running docker "pause-342805" container ...
	I1002 21:39:01.927478 1158612 machine.go:93] provisionDockerMachine start ...
	I1002 21:39:01.927573 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:01.944408 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:01.944732 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:01.944752 1158612 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:39:02.077819 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-342805
	
	I1002 21:39:02.077852 1158612 ubuntu.go:182] provisioning hostname "pause-342805"
	I1002 21:39:02.077918 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.096997 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:02.097320 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:02.097336 1158612 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-342805 && echo "pause-342805" | sudo tee /etc/hostname
	I1002 21:39:02.244143 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-342805
	
	I1002 21:39:02.244223 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.264010 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:02.264386 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:02.264408 1158612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-342805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-342805/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-342805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:39:02.406532 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
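
provisionDockerMachine reaches the node over SSH on whatever host port Docker published for the container's 22/tcp; the inspect template in the log extracts it (34156 here). A sketch of the same discovery plus a reachability probe, using the container name from this run:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// sshPort asks Docker for the host port published for the container's 22/tcp,
// the same template the `docker container inspect -f ...HostPort...` calls use.
func sshPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("pause-342805")
	if err != nil {
		panic(err)
	}
	addr := net.JoinHostPort("127.0.0.1", port)
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		panic(err)
	}
	conn.Close()
	fmt.Println("sshd reachable at", addr) // e.g. 127.0.0.1:34156 in this run
}
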
	I1002 21:39:02.406564 1158612 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:39:02.406586 1158612 ubuntu.go:190] setting up certificates
	I1002 21:39:02.406595 1158612 provision.go:84] configureAuth start
	I1002 21:39:02.406654 1158612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-342805
	I1002 21:39:02.424230 1158612 provision.go:143] copyHostCerts
	I1002 21:39:02.424301 1158612 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:39:02.424316 1158612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:39:02.424390 1158612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:39:02.424494 1158612 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:39:02.424499 1158612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:39:02.424526 1158612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:39:02.424585 1158612 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:39:02.424589 1158612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:39:02.424612 1158612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:39:02.424668 1158612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.pause-342805 san=[127.0.0.1 192.168.85.2 localhost minikube pause-342805]
	I1002 21:39:02.787585 1158612 provision.go:177] copyRemoteCerts
	I1002 21:39:02.787653 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:39:02.787706 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.807816 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:02.905578 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:39:02.923251 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 21:39:02.942150 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:39:02.958817 1158612 provision.go:87] duration metric: took 552.209025ms to configureAuth
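
configureAuth regenerates the machine's server certificate, signing it with the persisted CA and embedding the SANs listed at provision.go:117 (loopback, the node IP, and the hostnames). A compressed crypto/x509 sketch of that flow; error handling is elided and the field values are taken from this run for illustration only, not from minikube's source.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (minikube loads this from the persisted ca.pem/ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same SANs the log shows for pause-342805.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-342805"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "pause-342805"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // --> server.pem
}
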
	I1002 21:39:02.958842 1158612 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:39:02.959069 1158612 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:02.959173 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.981897 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:02.982311 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:02.982335 1158612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:39:08.300579 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:39:08.300616 1158612 machine.go:96] duration metric: took 6.373123033s to provisionDockerMachine
	I1002 21:39:08.300628 1158612 start.go:294] postStartSetup for "pause-342805" (driver="docker")
	I1002 21:39:08.300639 1158612 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:39:08.300746 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:39:08.300798 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.320458 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.418817 1158612 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:39:08.422477 1158612 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:39:08.422507 1158612 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:39:08.422519 1158612 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:39:08.422592 1158612 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:39:08.422728 1158612 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:39:08.422854 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:39:08.430404 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:39:08.448141 1158612 start.go:297] duration metric: took 147.496603ms for postStartSetup
	I1002 21:39:08.448223 1158612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:39:08.448266 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.465698 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.559660 1158612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:39:08.565213 1158612 fix.go:57] duration metric: took 6.658670812s for fixHost
	I1002 21:39:08.565236 1158612 start.go:84] releasing machines lock for "pause-342805", held for 6.658730478s
	I1002 21:39:08.565324 1158612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-342805
	I1002 21:39:08.582893 1158612 ssh_runner.go:195] Run: cat /version.json
	I1002 21:39:08.582944 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.582956 1158612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:39:08.583016 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.607279 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.610017 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.701733 1158612 ssh_runner.go:195] Run: systemctl --version
	I1002 21:39:08.793935 1158612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:39:08.834447 1158612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:39:08.840046 1158612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:39:08.840129 1158612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:39:08.848323 1158612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:39:08.848348 1158612 start.go:496] detecting cgroup driver to use...
	I1002 21:39:08.848410 1158612 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:39:08.848473 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:39:08.863485 1158612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:39:08.876582 1158612 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:39:08.876673 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:39:08.892467 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:39:08.906271 1158612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:39:09.043784 1158612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:39:09.167348 1158612 docker.go:234] disabling docker service ...
	I1002 21:39:09.167456 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:39:09.183050 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:39:09.196898 1158612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:39:09.338825 1158612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:39:09.475783 1158612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:39:09.489689 1158612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:39:09.504098 1158612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:39:09.504251 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.513501 1158612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:39:09.513624 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.523834 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.533734 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.543829 1158612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:39:09.552400 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.561393 1158612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.569689 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.578387 1158612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:39:09.585952 1158612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:39:09.593318 1158612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:09.722456 1158612 ssh_runner.go:195] Run: sudo systemctl restart crio
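
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting crio: point pause_image at registry.k8s.io/pause:3.10.1, force cgroup_manager to cgroupfs, and re-add conmon_cgroup plus the unprivileged-port sysctl. The same whole-line substitution, sketched in Go as a stand-in for the sed calls (not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces a whole `key = ...` line, the in-place equivalent of
// `sudo sed -i 's|^.*pause_image = .*$|...|'` in the log above.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllLiteralString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf) // would be written back to /etc/crio/crio.conf.d/02-crio.conf
}
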
	I1002 21:39:09.891619 1158612 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:39:09.891690 1158612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:39:09.895724 1158612 start.go:564] Will wait 60s for crictl version
	I1002 21:39:09.895828 1158612 ssh_runner.go:195] Run: which crictl
	I1002 21:39:09.899234 1158612 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:39:09.927571 1158612 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:39:09.927702 1158612 ssh_runner.go:195] Run: crio --version
	I1002 21:39:09.956794 1158612 ssh_runner.go:195] Run: crio --version
	I1002 21:39:09.990430 1158612 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:39:09.993406 1158612 cli_runner.go:164] Run: docker network inspect pause-342805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:39:10.015273 1158612 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:39:10.020073 1158612 kubeadm.go:883] updating cluster {Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:39:10.020246 1158612 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:10.020309 1158612 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:39:10.055993 1158612 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:39:10.056022 1158612 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:39:10.056084 1158612 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:39:10.083881 1158612 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:39:10.083911 1158612 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:39:10.083920 1158612 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:39:10.084045 1158612 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-342805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:39:10.084140 1158612 ssh_runner.go:195] Run: crio config
	I1002 21:39:10.151260 1158612 cni.go:84] Creating CNI manager for ""
	I1002 21:39:10.151285 1158612 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:39:10.151302 1158612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:39:10.151356 1158612 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-342805 NodeName:pause-342805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:39:10.151547 1158612 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-342805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:39:10.151640 1158612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:39:10.159884 1158612 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:39:10.159995 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:39:10.167939 1158612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 21:39:10.181366 1158612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:39:10.194948 1158612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1002 21:39:10.208224 1158612 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:39:10.212464 1158612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:10.340506 1158612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:39:10.354479 1158612 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805 for IP: 192.168.85.2
	I1002 21:39:10.354498 1158612 certs.go:195] generating shared ca certs ...
	I1002 21:39:10.354513 1158612 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:10.354700 1158612 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:39:10.354767 1158612 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:39:10.354782 1158612 certs.go:257] generating profile certs ...
	I1002 21:39:10.354889 1158612 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.key
	I1002 21:39:10.354957 1158612 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/apiserver.key.7baa9c76
	I1002 21:39:10.355020 1158612 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/proxy-client.key
	I1002 21:39:10.355165 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:39:10.355218 1158612 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:39:10.355234 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:39:10.355259 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:39:10.355311 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:39:10.355345 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:39:10.355416 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:39:10.356015 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:39:10.375784 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:39:10.393508 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:39:10.411551 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:39:10.428609 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:39:10.446006 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:39:10.463422 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:39:10.480577 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:39:10.497745 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:39:10.516902 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:39:10.533910 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:39:10.550797 1158612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:39:10.563142 1158612 ssh_runner.go:195] Run: openssl version
	I1002 21:39:10.569066 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:39:10.577156 1158612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:39:10.580982 1158612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:39:10.581099 1158612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:39:10.621963 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:39:10.630168 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:39:10.638585 1158612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:39:10.642404 1158612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:39:10.642480 1158612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:39:10.685104 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:39:10.693264 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:39:10.701672 1158612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:10.705485 1158612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:10.705559 1158612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:10.748667 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
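
Each `openssl x509 -hash -noout` run above computes the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem); OpenSSL-style directory lookups find a CA by exactly that name. The same install step sketched in Go, shelling out to openssl; it needs root, and the helper name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrust mirrors the hash-and-symlink dance in the log: compute the
// subject hash, then link <hash>.0 in /etc/ssl/certs at the PEM file.
func installTrust(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // `ln -fs` semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installTrust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
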
	I1002 21:39:10.761305 1158612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:39:10.765452 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:39:10.806831 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:39:10.847562 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:39:10.888664 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:39:10.929745 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:39:10.971075 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
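
The `-checkend 86400` runs verify that each control-plane certificate remains valid for at least the next 24 hours; a non-zero exit here is what prompts regeneration. A pure-Go equivalent using crypto/x509 (a sketch of the check, not minikube's exact code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the Go analogue of `openssl x509 -checkend 86400`:
// report whether the cert's NotAfter falls inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon) // true would trigger regeneration
}
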
	I1002 21:39:11.012518 1158612 kubeadm.go:400] StartCluster: {Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:11.012639 1158612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:39:11.012707 1158612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:39:11.042927 1158612 cri.go:89] found id: "aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e"
	I1002 21:39:11.042949 1158612 cri.go:89] found id: "3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58"
	I1002 21:39:11.042955 1158612 cri.go:89] found id: "a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf"
	I1002 21:39:11.042958 1158612 cri.go:89] found id: "a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373"
	I1002 21:39:11.042961 1158612 cri.go:89] found id: "896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695"
	I1002 21:39:11.042964 1158612 cri.go:89] found id: "eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99"
	I1002 21:39:11.042967 1158612 cri.go:89] found id: "8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0"
	I1002 21:39:11.042992 1158612 cri.go:89] found id: ""
	I1002 21:39:11.043052 1158612 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:39:11.054141 1158612 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:39:11Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:39:11.054234 1158612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:39:11.063006 1158612 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:39:11.063027 1158612 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:39:11.063095 1158612 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:39:11.077841 1158612 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:39:11.078563 1158612 kubeconfig.go:125] found "pause-342805" server: "https://192.168.85.2:8443"
	I1002 21:39:11.079456 1158612 kapi.go:59] client config for pause-342805: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
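	The `kapi.go` dump above is a plain client-go `rest.Config`: a host plus mutual-TLS certificate material. A minimal sketch that builds the equivalent config and makes one API call; the paths are shortened placeholders, not the actual minikube layout logic:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same shape as the dumped config: host plus client cert, key, and CA.
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".minikube/profiles/pause-342805/client.crt",
			KeyFile:  ".minikube/profiles/pause-342805/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```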
	I1002 21:39:11.079965 1158612 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:39:11.079985 1158612 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:39:11.079991 1158612 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:39:11.079996 1158612 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:39:11.080002 1158612 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:39:11.080264 1158612 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:39:11.088772 1158612 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 21:39:11.088811 1158612 kubeadm.go:601] duration metric: took 25.774852ms to restartPrimaryControlPlane
	I1002 21:39:11.088852 1158612 kubeadm.go:402] duration metric: took 76.312277ms to StartCluster
	I1002 21:39:11.088879 1158612 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:11.088968 1158612 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:39:11.089930 1158612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:11.090242 1158612 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:39:11.090566 1158612 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:11.090623 1158612 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:39:11.095086 1158612 out.go:179] * Verifying Kubernetes components...
	I1002 21:39:11.095083 1158612 out.go:179] * Enabled addons: 
	I1002 21:39:11.097923 1158612 addons.go:514] duration metric: took 7.288104ms for enable addons: enabled=[]
	I1002 21:39:11.098019 1158612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:11.238139 1158612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:39:11.252553 1158612 node_ready.go:35] waiting up to 6m0s for node "pause-342805" to be "Ready" ...
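	The `node_ready` wait above reduces to polling the Node object's `Ready` condition until it reports `True`. A minimal client-go sketch of that loop; the insecure TLS setting is a placeholder for real credentials, not what minikube uses:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// nodeReady reports whether the named node carries condition Ready=True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) bool {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false // transient API errors count as "not ready yet"
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true}} // placeholder creds only
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the wait above
	for time.Now().Before(deadline) {
		if nodeReady(context.Background(), cs, "pause-342805") {
			fmt.Println(`node "pause-342805" is "Ready"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}
```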
	I1002 21:39:11.785798 1144154 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.117899668s)
	W1002 21:39:11.785837 1144154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 21:39:11.785845 1144154 logs.go:123] Gathering logs for kube-apiserver [6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2] ...
	I1002 21:39:11.785855 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	I1002 21:39:11.849845 1144154 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:39:11.849920 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:39:11.949533 1144154 logs.go:123] Gathering logs for kubelet ...
	I1002 21:39:11.949628 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:39:12.094450 1144154 logs.go:123] Gathering logs for dmesg ...
	I1002 21:39:12.094543 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:39:12.116808 1144154 logs.go:123] Gathering logs for kube-apiserver [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46] ...
	I1002 21:39:12.116833 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:12.177911 1144154 logs.go:123] Gathering logs for kube-scheduler [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe] ...
	I1002 21:39:12.177991 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
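	Each "Gathering logs for ..." step above is a tail-limited `crictl logs` invocation run over SSH; run locally on the node, the equivalent is the sketch below, with the container ID and tail size taken from the lines above:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerLogs tails the last n lines of a CRI container's log via crictl.
func containerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// ID taken from the kube-apiserver entry above.
	logs, err := containerLogs("ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46", 400)
	if err != nil {
		fmt.Println("crictl failed:", err) // e.g. NotFound once the container is gone
	}
	fmt.Print(logs)
}
```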
	I1002 21:39:14.768347 1144154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:39:16.771216 1144154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:55690->192.168.76.2:8443: read: connection reset by peer
	I1002 21:39:16.771262 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:39:16.771317 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:39:16.820783 1144154 cri.go:89] found id: "ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:16.820801 1144154 cri.go:89] found id: "6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	I1002 21:39:16.820805 1144154 cri.go:89] found id: ""
	I1002 21:39:16.820813 1144154 logs.go:282] 2 containers: [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2]
	I1002 21:39:16.820870 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:16.824554 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:16.828164 1144154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:39:16.828225 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:39:16.868549 1144154 cri.go:89] found id: ""
	I1002 21:39:16.868567 1144154 logs.go:282] 0 containers: []
	W1002 21:39:16.868574 1144154 logs.go:284] No container was found matching "etcd"
	I1002 21:39:16.868580 1144154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:39:16.868720 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:39:16.908615 1144154 cri.go:89] found id: ""
	I1002 21:39:16.908635 1144154 logs.go:282] 0 containers: []
	W1002 21:39:16.908643 1144154 logs.go:284] No container was found matching "coredns"
	I1002 21:39:16.908659 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:39:16.908747 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:39:16.944423 1144154 cri.go:89] found id: "5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:16.944446 1144154 cri.go:89] found id: ""
	I1002 21:39:16.944455 1144154 logs.go:282] 1 containers: [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe]
	I1002 21:39:16.944519 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:16.953255 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:39:16.953337 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:39:17.010579 1144154 cri.go:89] found id: ""
	I1002 21:39:17.010601 1144154 logs.go:282] 0 containers: []
	W1002 21:39:17.010609 1144154 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:39:17.010615 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:39:17.010680 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:39:17.048370 1144154 cri.go:89] found id: "3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:17.048450 1144154 cri.go:89] found id: ""
	I1002 21:39:17.048473 1144154 logs.go:282] 1 containers: [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35]
	I1002 21:39:17.048578 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:17.053561 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:39:17.053629 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:39:17.099327 1144154 cri.go:89] found id: ""
	I1002 21:39:17.099402 1144154 logs.go:282] 0 containers: []
	W1002 21:39:17.099437 1144154 logs.go:284] No container was found matching "kindnet"
	I1002 21:39:17.099463 1144154 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 21:39:17.099553 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 21:39:17.135380 1144154 cri.go:89] found id: ""
	I1002 21:39:17.135401 1144154 logs.go:282] 0 containers: []
	W1002 21:39:17.135409 1144154 logs.go:284] No container was found matching "storage-provisioner"
	I1002 21:39:17.135421 1144154 logs.go:123] Gathering logs for kube-scheduler [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe] ...
	I1002 21:39:17.135433 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:17.201552 1144154 logs.go:123] Gathering logs for kube-controller-manager [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35] ...
	I1002 21:39:17.201651 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:17.237788 1144154 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:39:17.237865 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:39:17.316034 1144154 logs.go:123] Gathering logs for container status ...
	I1002 21:39:17.316067 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:39:17.363209 1144154 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:39:17.363235 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:39:17.473408 1144154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:39:17.473424 1144154 logs.go:123] Gathering logs for kube-apiserver [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46] ...
	I1002 21:39:17.473438 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:17.528901 1144154 logs.go:123] Gathering logs for kube-apiserver [6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2] ...
	I1002 21:39:17.528979 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	W1002 21:39:17.578863 1144154 logs.go:130] failed kube-apiserver [6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:39:17.575776    4012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist" containerID="6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	time="2025-10-02T21:39:17Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1002 21:39:17.575776    4012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist" containerID="6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	time="2025-10-02T21:39:17Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist"
	
	** /stderr **
	I1002 21:39:17.578884 1144154 logs.go:123] Gathering logs for kubelet ...
	I1002 21:39:17.578896 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:39:17.714220 1144154 logs.go:123] Gathering logs for dmesg ...
	I1002 21:39:17.714330 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:39:16.745592 1158612 node_ready.go:49] node "pause-342805" is "Ready"
	I1002 21:39:16.745616 1158612 node_ready.go:38] duration metric: took 5.493031658s for node "pause-342805" to be "Ready" ...
	I1002 21:39:16.745629 1158612 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:39:16.745689 1158612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:39:16.769979 1158612 api_server.go:72] duration metric: took 5.679702609s to wait for apiserver process to appear ...
	I1002 21:39:16.770001 1158612 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:39:16.770019 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:16.868020 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1002 21:39:16.868093 1158612 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1002 21:39:17.270647 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:17.286735 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:39:17.286822 1158612 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:39:17.770330 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:17.779856 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:39:17.779950 1158612 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:39:18.270122 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:18.279000 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 21:39:18.280319 1158612 api_server.go:141] control plane version: v1.34.1
	I1002 21:39:18.280389 1158612 api_server.go:131] duration metric: took 1.510379757s to wait for apiserver health ...
	I1002 21:39:18.280414 1158612 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:39:18.284879 1158612 system_pods.go:59] 7 kube-system pods found
	I1002 21:39:18.284971 1158612 system_pods.go:61] "coredns-66bc5c9577-wklz5" [703d8048-2e98-4139-bb02-6bf0333a3a18] Running
	I1002 21:39:18.284993 1158612 system_pods.go:61] "etcd-pause-342805" [21b56e85-81aa-470d-8401-ec24205fe60f] Running
	I1002 21:39:18.285028 1158612 system_pods.go:61] "kindnet-9p45q" [57fa9c10-a34c-4e2c-8201-e61aedf6b127] Running
	I1002 21:39:18.285053 1158612 system_pods.go:61] "kube-apiserver-pause-342805" [5c62ca30-d895-4bdd-a2eb-337f5cbeacac] Running
	I1002 21:39:18.285081 1158612 system_pods.go:61] "kube-controller-manager-pause-342805" [f56c68bd-d65f-4837-88a6-327d4b5e47ee] Running
	I1002 21:39:18.285102 1158612 system_pods.go:61] "kube-proxy-b8p7f" [198ef9a9-14fd-48d1-a6a2-e318eaa0436e] Running
	I1002 21:39:18.285142 1158612 system_pods.go:61] "kube-scheduler-pause-342805" [6432c0a0-8ce6-4963-a75a-004c0f10732c] Running
	I1002 21:39:18.285170 1158612 system_pods.go:74] duration metric: took 4.735132ms to wait for pod list to return data ...
	I1002 21:39:18.285195 1158612 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:39:18.288619 1158612 default_sa.go:45] found service account: "default"
	I1002 21:39:18.288704 1158612 default_sa.go:55] duration metric: took 3.487979ms for default service account to be created ...
	I1002 21:39:18.288749 1158612 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:39:18.298418 1158612 system_pods.go:86] 7 kube-system pods found
	I1002 21:39:18.298447 1158612 system_pods.go:89] "coredns-66bc5c9577-wklz5" [703d8048-2e98-4139-bb02-6bf0333a3a18] Running
	I1002 21:39:18.298454 1158612 system_pods.go:89] "etcd-pause-342805" [21b56e85-81aa-470d-8401-ec24205fe60f] Running
	I1002 21:39:18.298459 1158612 system_pods.go:89] "kindnet-9p45q" [57fa9c10-a34c-4e2c-8201-e61aedf6b127] Running
	I1002 21:39:18.298463 1158612 system_pods.go:89] "kube-apiserver-pause-342805" [5c62ca30-d895-4bdd-a2eb-337f5cbeacac] Running
	I1002 21:39:18.298468 1158612 system_pods.go:89] "kube-controller-manager-pause-342805" [f56c68bd-d65f-4837-88a6-327d4b5e47ee] Running
	I1002 21:39:18.298471 1158612 system_pods.go:89] "kube-proxy-b8p7f" [198ef9a9-14fd-48d1-a6a2-e318eaa0436e] Running
	I1002 21:39:18.298475 1158612 system_pods.go:89] "kube-scheduler-pause-342805" [6432c0a0-8ce6-4963-a75a-004c0f10732c] Running
	I1002 21:39:18.298482 1158612 system_pods.go:126] duration metric: took 9.698722ms to wait for k8s-apps to be running ...
	I1002 21:39:18.298489 1158612 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:39:18.298547 1158612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:39:18.316945 1158612 system_svc.go:56] duration metric: took 18.431012ms WaitForService to wait for kubelet
	I1002 21:39:18.317022 1158612 kubeadm.go:586] duration metric: took 7.22674822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
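	The kubelet service wait above is an exit-code check: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. A minimal sketch of the same check:

```go
package main

import (
	"fmt"
	"os/exec"
)

// unitActive reports whether a systemd unit is active; `systemctl
// is-active --quiet` exits 0 for active and non-zero otherwise.
func unitActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", unitActive("kubelet"))
}
```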
	I1002 21:39:18.317083 1158612 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:39:18.321767 1158612 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:39:18.321847 1158612 node_conditions.go:123] node cpu capacity is 2
	I1002 21:39:18.321876 1158612 node_conditions.go:105] duration metric: took 4.77253ms to run NodePressure ...
	I1002 21:39:18.321902 1158612 start.go:242] waiting for startup goroutines ...
	I1002 21:39:18.321938 1158612 start.go:247] waiting for cluster config update ...
	I1002 21:39:18.321964 1158612 start.go:256] writing updated cluster config ...
	I1002 21:39:18.322409 1158612 ssh_runner.go:195] Run: rm -f paused
	I1002 21:39:18.326420 1158612 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:39:18.327061 1158612 kapi.go:59] client config for pause-342805: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:39:18.335813 1158612 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wklz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.347037 1158612 pod_ready.go:94] pod "coredns-66bc5c9577-wklz5" is "Ready"
	I1002 21:39:18.347064 1158612 pod_ready.go:86] duration metric: took 11.221617ms for pod "coredns-66bc5c9577-wklz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.349567 1158612 pod_ready.go:83] waiting for pod "etcd-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.354157 1158612 pod_ready.go:94] pod "etcd-pause-342805" is "Ready"
	I1002 21:39:18.354182 1158612 pod_ready.go:86] duration metric: took 4.592391ms for pod "etcd-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.356492 1158612 pod_ready.go:83] waiting for pod "kube-apiserver-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.360865 1158612 pod_ready.go:94] pod "kube-apiserver-pause-342805" is "Ready"
	I1002 21:39:18.360889 1158612 pod_ready.go:86] duration metric: took 4.373427ms for pod "kube-apiserver-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.363146 1158612 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.730777 1158612 pod_ready.go:94] pod "kube-controller-manager-pause-342805" is "Ready"
	I1002 21:39:18.730858 1158612 pod_ready.go:86] duration metric: took 367.68662ms for pod "kube-controller-manager-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.933183 1158612 pod_ready.go:83] waiting for pod "kube-proxy-b8p7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.330775 1158612 pod_ready.go:94] pod "kube-proxy-b8p7f" is "Ready"
	I1002 21:39:19.330841 1158612 pod_ready.go:86] duration metric: took 397.58717ms for pod "kube-proxy-b8p7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.530612 1158612 pod_ready.go:83] waiting for pod "kube-scheduler-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.934225 1158612 pod_ready.go:94] pod "kube-scheduler-pause-342805" is "Ready"
	I1002 21:39:19.934254 1158612 pod_ready.go:86] duration metric: took 403.617102ms for pod "kube-scheduler-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.934266 1158612 pod_ready.go:40] duration metric: took 1.607811964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
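	The extra `pod_ready` wait above lists kube-system pods under each control-plane label and requires the `PodReady` condition to be `True` on every match. A single-pass sketch over a few of those selectors; credentials are elided, so wire in cert material as in the earlier config sketch:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// podsReady reports whether every kube-system pod matching the selector
// has condition PodReady=True.
func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.85.2:8443"} // add cert/key/CA as in the earlier sketch
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		ok, err := podsReady(context.Background(), cs, sel)
		fmt.Println(sel, ok, err)
	}
}
```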
	I1002 21:39:20.015029 1158612 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:39:20.020711 1158612 out.go:179] * Done! kubectl is now configured to use "pause-342805" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.570785749Z" level=info msg="Started container" PID=2344 containerID=710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588 description=kube-system/kube-scheduler-pause-342805/kube-scheduler id=ea9d1743-4a10-4174-bb60-2c744cca61f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45035934e89a1607d3cd02f00958f38075d853c9c173a4af011c3dda048d92d9
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.589007791Z" level=info msg="Started container" PID=2342 containerID=a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70 description=kube-system/coredns-66bc5c9577-wklz5/coredns id=1229f707-e32b-46a7-afd8-7eb746dfcba4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8286aa13ff27d8de4ab63985d86645ec45ad8b82fe93de47ab432b58663349f8
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.591103586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.591623407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.608534034Z" level=info msg="Created container b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b: kube-system/kube-apiserver-pause-342805/kube-apiserver" id=16bf2bd4-2c44-4ec1-9fd4-3cfc64caf035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.61136064Z" level=info msg="Starting container: b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b" id=cc20f0eb-faab-491e-a82c-a29d7ff61011 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.613649177Z" level=info msg="Started container" PID=2345 containerID=b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b description=kube-system/kube-apiserver-pause-342805/kube-apiserver id=cc20f0eb-faab-491e-a82c-a29d7ff61011 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f932a5c0c7c3bb788c662bb3e1cd9e97fba18bc2a07494c9c2c6fe3189dbd600
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.632861837Z" level=info msg="Created container fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a: kube-system/kube-controller-manager-pause-342805/kube-controller-manager" id=1cd23463-bfb4-4808-9cfb-09f65edc0707 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.633707311Z" level=info msg="Starting container: fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a" id=758d31e5-8610-4c4c-9b92-721471569171 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.637542546Z" level=info msg="Started container" PID=2369 containerID=fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a description=kube-system/kube-controller-manager-pause-342805/kube-controller-manager id=758d31e5-8610-4c4c-9b92-721471569171 name=/runtime.v1.RuntimeService/StartContainer sandboxID=db66b4884e3c6070df946e4e4e27808c7218ee4bbbdd861da808646c0711ef9a
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.648655728Z" level=info msg="Created container 49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb: kube-system/etcd-pause-342805/etcd" id=4a5af565-4c5a-4393-a99c-729ebc63da5d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.649363343Z" level=info msg="Starting container: 49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb" id=2f83805b-82bc-47fc-b12c-a39091b4f5cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.651492351Z" level=info msg="Started container" PID=2373 containerID=49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb description=kube-system/etcd-pause-342805/etcd id=2f83805b-82bc-47fc-b12c-a39091b4f5cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7af2ca5fa18a3e4aa13ab24679a64931f680c4888010e31cf7f087c33ef5fe8
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.818726524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.823391209Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.82353436Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.823611782Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.827585975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.827616374Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.827632537Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.831435872Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.831471391Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.831495767Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.835567525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.835715492Z" level=info msg="Updated default CNI network name to kindnet"
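	CRI-O's "CNI monitoring event" lines above correspond to filesystem notifications on /etc/cni/net.d: each CREATE, WRITE, or RENAME of a conflist triggers a config reload and the "Updated default CNI network name" message. The same watch pattern with fsnotify, as an illustration of the mechanism suggested by those events rather than CRI-O's actual code:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A reload would be triggered here on CREATE/WRITE/RENAME of *.conflist.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```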
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	49b680a7feedf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago       Running             etcd                      1                   f7af2ca5fa18a       etcd-pause-342805                      kube-system
	fb276e882bc35       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago       Running             kube-controller-manager   1                   db66b4884e3c6       kube-controller-manager-pause-342805   kube-system
	b6f87782ca221       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago       Running             kube-apiserver            1                   f932a5c0c7c3b       kube-apiserver-pause-342805            kube-system
	710a4b876c5a6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago       Running             kube-scheduler            1                   45035934e89a1       kube-scheduler-pause-342805            kube-system
	a8a81ebb8ff2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   11 seconds ago       Running             coredns                   1                   8286aa13ff27d       coredns-66bc5c9577-wklz5               kube-system
	f51f346240e74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   12 seconds ago       Running             kube-proxy                1                   c22cc56af246c       kube-proxy-b8p7f                       kube-system
	2eebf60bbf550       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   12 seconds ago       Running             kindnet-cni               1                   f0951a53464f2       kindnet-9p45q                          kube-system
	aa126941d61c5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Exited              coredns                   0                   8286aa13ff27d       coredns-66bc5c9577-wklz5               kube-system
	3824cad09c213       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f0951a53464f2       kindnet-9p45q                          kube-system
	a0a1466e037f1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c22cc56af246c       kube-proxy-b8p7f                       kube-system
	a21ccda7e71dd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   f932a5c0c7c3b       kube-apiserver-pause-342805            kube-system
	896a2a1f7e815       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   45035934e89a1       kube-scheduler-pause-342805            kube-system
	eaaa85998fc71       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   f7af2ca5fa18a       etcd-pause-342805                      kube-system
	8c463a387ca63       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   db66b4884e3c6       kube-controller-manager-pause-342805   kube-system
	
	
	==> coredns [a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60678 - 33845 "HINFO IN 8882800103220148662.2007934130076834967. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019975446s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60620 - 48403 "HINFO IN 8765493537311838405.7210490724235879187. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036251201s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-342805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-342805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=pause-342805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_38_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:38:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-342805
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:39:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-342805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 de142cc9fb0445e5a73a6c6cca2db4b3
	  System UUID:                2dc81684-4caa-4340-a064-df5b7bf8ba40
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wklz5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     66s
	  kube-system                 etcd-pause-342805                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         71s
	  kube-system                 kindnet-9p45q                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      66s
	  kube-system                 kube-apiserver-pause-342805             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-342805    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-b8p7f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-342805             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 65s   kube-proxy       
	  Normal   Starting                 5s    kube-proxy       
	  Normal   Starting                 71s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s   kubelet          Node pause-342805 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s   kubelet          Node pause-342805 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s   kubelet          Node pause-342805 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           67s   node-controller  Node pause-342805 event: Registered Node pause-342805 in Controller
	  Normal   NodeReady                25s   kubelet          Node pause-342805 status is now: NodeReady
	  Normal   RegisteredNode           3s    node-controller  Node pause-342805 event: Registered Node pause-342805 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:08] overlayfs: idmapped layers are currently not supported
	[  +3.176407] overlayfs: idmapped layers are currently not supported
	[ +43.828152] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:09] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb] <==
	{"level":"warn","ts":"2025-10-02T21:39:15.215361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.241279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.256816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.275949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.298586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.321342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.328582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.366745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.370091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.426437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.460999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.498592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.517412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.553367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.579761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.630108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.653045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.684474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.704079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.730712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.762749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.795189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.811452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.838830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.922741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42136","server-name":"","error":"EOF"}
	
	
	==> etcd [eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99] <==
	{"level":"warn","ts":"2025-10-02T21:38:08.786231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.803015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.825796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.873787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.884379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.898902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.968794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48382","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:39:03.138533Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:39:03.138596Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-342805","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-02T21:39:03.138682Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:39:03.288850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:39:03.288937Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:39:03.288958Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-02T21:39:03.289002Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T21:39:03.289079Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289082Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289098Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:39:03.289104Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289139Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289147Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:39:03.289154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:39:03.292266Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-02T21:39:03.292355Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:39:03.292395Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T21:39:03.292403Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-342805","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 21:39:23 up  6:21,  0 user,  load average: 2.57, 2.40, 1.97
	Linux pause-342805 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2eebf60bbf5500fe23ec8cd8315dfbc1950838a47c8aece540057eda0ca29225] <==
	I1002 21:39:11.637743       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:39:11.638802       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:39:11.638938       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:39:11.638949       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:39:11.638964       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:39:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:39:11.819617       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:39:11.819648       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:39:11.819659       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:39:11.821452       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:39:16.680882       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:39:16.680992       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:39:16.681143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:39:16.681182       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:39:18.120280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:39:18.120316       1 metrics.go:72] Registering metrics
	I1002 21:39:18.120425       1 controller.go:711] "Syncing nftables rules"
	I1002 21:39:21.818104       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:39:21.818158       1 main.go:301] handling current node
	
	
	==> kindnet [3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58] <==
	I1002 21:38:18.321266       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:38:18.321920       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:38:18.325485       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:38:18.325593       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:38:18.325631       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:38:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:38:18.517676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:38:18.517766       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:38:18.517798       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:38:18.517981       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:38:48.518331       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:38:48.518346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:38:48.518443       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:38:48.518514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 21:38:50.118294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:38:50.118412       1 metrics.go:72] Registering metrics
	I1002 21:38:50.118560       1 controller.go:711] "Syncing nftables rules"
	I1002 21:38:58.517834       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:38:58.517889       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373] <==
	W1002 21:39:03.153141       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.153169       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.153191       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.155770       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.155807       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.155965       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156130       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156163       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156382       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156416       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156446       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156477       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160508       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160539       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160564       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160589       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160614       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160639       1 logging.go:55] [core] [Channel #19 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160662       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160686       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160713       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160864       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160893       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.161248       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.163527       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b] <==
	I1002 21:39:16.636692       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1002 21:39:16.636942       1 controller.go:119] Starting legacy_token_tracking_controller
	I1002 21:39:16.710338       1 shared_informer.go:349] "Waiting for caches to sync" controller="configmaps"
	I1002 21:39:16.636974       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1002 21:39:16.710722       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1002 21:39:16.797028       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:39:16.798866       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:39:16.814276       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:39:16.814422       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:39:16.828241       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 21:39:16.828548       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1002 21:39:16.894670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:39:16.905659       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:39:16.905895       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:39:16.905966       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:39:16.907561       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:39:16.911844       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:39:16.915119       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 21:39:16.915198       1 policy_source.go:240] refreshing policies
	I1002 21:39:16.942215       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:39:16.952583       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:39:16.954411       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:39:16.968536       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:39:17.637484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:39:18.896790       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0] <==
	I1002 21:38:16.654581       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:38:16.654623       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:38:16.654667       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:38:16.663768       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-342805" podCIDRs=["10.244.0.0/24"]
	I1002 21:38:16.663815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:38:16.671990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:38:16.680534       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:38:16.680540       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:38:16.681652       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:38:16.681657       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:38:16.682923       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:38:16.683013       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:38:16.683187       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:38:16.683303       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:38:16.683343       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:38:16.683408       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:38:16.683487       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-342805"
	I1002 21:38:16.683528       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 21:38:16.686122       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:38:16.687378       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:38:16.688591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:38:16.689872       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:38:16.689937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:38:16.693359       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:39:01.688962       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a] <==
	I1002 21:39:20.544469       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:39:20.551157       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:39:20.551227       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:39:20.566485       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:39:20.590841       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:39:20.590959       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:39:20.591046       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:39:20.593945       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:39:20.594004       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:39:20.594136       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:39:20.596508       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:39:20.597715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:39:20.608645       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 21:39:20.608740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:39:20.608766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:39:20.608789       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 21:39:20.621066       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:39:20.621098       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:39:20.621108       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:39:20.638200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:39:20.638327       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:39:20.638410       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-342805"
	I1002 21:39:20.638463       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:39:20.670319       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:39:20.690636       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf] <==
	I1002 21:38:18.235974       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:38:18.372466       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:38:18.473178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:38:18.473303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:38:18.473437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:38:18.506077       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:38:18.506205       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:38:18.519685       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:38:18.520102       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:38:18.524978       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:38:18.526404       1 config.go:200] "Starting service config controller"
	I1002 21:38:18.526464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:38:18.526505       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:38:18.526532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:38:18.526584       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:38:18.526610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:38:18.527310       1 config.go:309] "Starting node config controller"
	I1002 21:38:18.528551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:38:18.528599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:38:18.627500       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:38:18.627616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:38:18.627642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f51f346240e74219bcbf28ffade50a4f610755e03459803502dccfa987f4c32b] <==
	I1002 21:39:12.137448       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:39:13.149223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1002 21:39:16.887879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-342805\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1002 21:39:18.474531       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:39:18.474659       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:39:18.474773       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:39:18.498525       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:39:18.498586       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:39:18.502451       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:39:18.502745       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:39:18.502768       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:39:18.504113       1 config.go:200] "Starting service config controller"
	I1002 21:39:18.504138       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:39:18.504167       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:39:18.504183       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:39:18.504194       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:39:18.504198       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:39:18.505028       1 config.go:309] "Starting node config controller"
	I1002 21:39:18.505048       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:39:18.505055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:39:18.604291       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:39:18.604328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:39:18.604354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588] <==
	I1002 21:39:17.178716       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:39:18.539463       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:39:18.539566       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:39:18.544403       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:39:18.544539       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:39:18.544656       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:18.544715       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:18.544774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:39:18.544811       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:39:18.544909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:39:18.544990       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:39:18.645495       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:39:18.645618       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:39:18.645755       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695] <==
	E1002 21:38:09.740015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:38:09.740109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:38:09.740151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:38:09.740168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:38:10.559450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:38:10.684432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:38:10.728197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:38:10.756768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:38:10.763418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:38:10.769485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:38:10.807322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:38:10.816014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:38:10.820540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:38:10.861031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:38:10.879010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:38:10.921643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 21:38:10.926743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:38:11.175469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 21:38:13.985682       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:03.137226       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 21:39:03.137270       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:03.137236       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:39:03.137318       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:39:03.137322       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:39:03.137337       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.543576    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-342805\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e80884df27b724a2121d49199cbc5758" pod="kube-system/kube-controller-manager-pause-342805"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.544150    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8p7f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="198ef9a9-14fd-48d1-a6a2-e318eaa0436e" pod="kube-system/kube-proxy-b8p7f"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.544981    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9p45q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57fa9c10-a34c-4e2c-8201-e61aedf6b127" pod="kube-system/kindnet-9p45q"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.545350    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wklz5\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="703d8048-2e98-4139-bb02-6bf0333a3a18" pod="kube-system/coredns-66bc5c9577-wklz5"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.545677    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-342805\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="278d87b72416104012ce1c09ad4fa897" pod="kube-system/kube-scheduler-pause-342805"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.546022    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-342805\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="16d2c7133107f2d0bcf56be7d313f152" pod="kube-system/etcd-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.652936    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9p45q\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="57fa9c10-a34c-4e2c-8201-e61aedf6b127" pod="kube-system/kindnet-9p45q"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.653909    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-342805\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.654070    1304 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-342805\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.658533    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-wklz5\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="703d8048-2e98-4139-bb02-6bf0333a3a18" pod="kube-system/coredns-66bc5c9577-wklz5"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.661466    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="278d87b72416104012ce1c09ad4fa897" pod="kube-system/kube-scheduler-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.665591    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="16d2c7133107f2d0bcf56be7d313f152" pod="kube-system/etcd-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.668687    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="7c6b81ffc8e764b4f2a95b59b0ff4299" pod="kube-system/kube-apiserver-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.670510    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="e80884df27b724a2121d49199cbc5758" pod="kube-system/kube-controller-manager-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.671963    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-b8p7f\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="198ef9a9-14fd-48d1-a6a2-e318eaa0436e" pod="kube-system/kube-proxy-b8p7f"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.673489    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="278d87b72416104012ce1c09ad4fa897" pod="kube-system/kube-scheduler-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.675300    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="16d2c7133107f2d0bcf56be7d313f152" pod="kube-system/etcd-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.676809    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="7c6b81ffc8e764b4f2a95b59b0ff4299" pod="kube-system/kube-apiserver-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.678701    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="e80884df27b724a2121d49199cbc5758" pod="kube-system/kube-controller-manager-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.680961    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-b8p7f\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="198ef9a9-14fd-48d1-a6a2-e318eaa0436e" pod="kube-system/kube-proxy-b8p7f"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.682178    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9p45q\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="57fa9c10-a34c-4e2c-8201-e61aedf6b127" pod="kube-system/kindnet-9p45q"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.683191    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-wklz5\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="703d8048-2e98-4139-bb02-6bf0333a3a18" pod="kube-system/coredns-66bc5c9577-wklz5"
	Oct 02 21:39:20 pause-342805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:39:20 pause-342805 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:39:20 pause-342805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
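
The repeated "no relationship found between node 'pause-342805' and this object" errors above are the kube-apiserver's Node authorizer at work: a kubelet identity may only read pods, ConfigMaps and similar objects already bound to its own node, and while the control plane is coming back up that node-to-object graph is not yet rebuilt, so the watches are refused. A rough way to probe the same authorization path from an admin context (a sketch; the context and node name are reused from the log, and impersonating a node requires impersonate rights):

	kubectl --context pause-342805 auth can-i get pods \
	    --namespace kube-system \
	    --as system:node:pause-342805 --as-group system:nodes
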
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342805 -n pause-342805
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342805 -n pause-342805: exit status 2 (470.302252ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
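
The --format argument above is a Go template rendered against minikube's status struct, and the non-zero exit encodes component state rather than a command failure, which is why the harness notes that exit status 2 "may be ok" for a paused cluster. A sketch that reads several fields in one call (Host and APIServer appear in the commands above; Kubelet is assumed to be another field of the same struct):

	out/minikube-linux-arm64 status -p pause-342805 \
	    --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
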
helpers_test.go:269: (dbg) Run:  kubectl --context pause-342805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-342805
helpers_test.go:243: (dbg) docker inspect pause-342805:

-- stdout --
	[
	    {
	        "Id": "530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62",
	        "Created": "2025-10-02T21:37:45.475135327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1154611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:37:45.539913308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/hostname",
	        "HostsPath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/hosts",
	        "LogPath": "/var/lib/docker/containers/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62/530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62-json.log",
	        "Name": "/pause-342805",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-342805:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-342805",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "530b4c7e0490615479203ddfef0f069329f6f4019ae6db34e6338cf6a940ad62",
	                "LowerDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82010b3fc00cb2e0297506621d71ee1fbe5ea97de7fdcb4b8582c2487c6ace5a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-342805",
	                "Source": "/var/lib/docker/volumes/pause-342805/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-342805",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-342805",
	                "name.minikube.sigs.k8s.io": "pause-342805",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "acc0608eff5b2cd2c1b69653ec74f2fd34f6f6a6cc12cb0d93789c581113fa25",
	            "SandboxKey": "/var/run/docker/netns/acc0608eff5b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34156"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34157"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34160"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34158"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34159"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-342805": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:fb:4c:f8:a0:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "81494d96e03b293c61c1cdd8f7783b34a1a7a1b0c4b109a75e93b51a5ab06b80",
	                    "EndpointID": "52f413570282fbeb723b63b15fdc77cf0c9740028c46017e3692df79124d777f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-342805",
	                        "530b4c7e0490"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
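
Individual fields can be pulled out of an inspect document like this one with the same Go-template syntax minikube itself uses later in this log. A sketch; note that map keys containing hyphens or slashes need the index function rather than dot access:

	docker inspect pause-342805 \
	    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	docker inspect pause-342805 \
	    --format '{{(index .NetworkSettings.Networks "pause-342805").IPAddress}}'
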
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-342805 -n pause-342805
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-342805 -n pause-342805: exit status 2 (334.312308ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-342805 logs -n 25
E1002 21:39:25.746941  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-342805 logs -n 25: (1.339279752s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-222907 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:33 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p missing-upgrade-192196 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-192196    │ jenkins │ v1.32.0 │ 02 Oct 25 21:33 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ delete  │ -p NoKubernetes-222907                                                                                                                   │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p missing-upgrade-192196 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-192196    │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:35 UTC │
	│ ssh     │ -p NoKubernetes-222907 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │                     │
	│ stop    │ -p NoKubernetes-222907                                                                                                                   │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p NoKubernetes-222907 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:35 UTC │
	│ ssh     │ -p NoKubernetes-222907 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │                     │
	│ delete  │ -p NoKubernetes-222907                                                                                                                   │ NoKubernetes-222907       │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ start   │ -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-840583 │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ delete  │ -p missing-upgrade-192196                                                                                                                │ missing-upgrade-192196    │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ start   │ -p stopped-upgrade-678661 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-678661    │ jenkins │ v1.32.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:36 UTC │
	│ stop    │ -p kubernetes-upgrade-840583                                                                                                             │ kubernetes-upgrade-840583 │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │ 02 Oct 25 21:35 UTC │
	│ start   │ -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-840583 │ jenkins │ v1.37.0 │ 02 Oct 25 21:35 UTC │                     │
	│ stop    │ stopped-upgrade-678661 stop                                                                                                              │ stopped-upgrade-678661    │ jenkins │ v1.32.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:36 UTC │
	│ start   │ -p stopped-upgrade-678661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-678661    │ jenkins │ v1.37.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:36 UTC │
	│ delete  │ -p stopped-upgrade-678661                                                                                                                │ stopped-upgrade-678661    │ jenkins │ v1.37.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:36 UTC │
	│ start   │ -p running-upgrade-497263 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-497263    │ jenkins │ v1.32.0 │ 02 Oct 25 21:36 UTC │ 02 Oct 25 21:37 UTC │
	│ start   │ -p running-upgrade-497263 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-497263    │ jenkins │ v1.37.0 │ 02 Oct 25 21:37 UTC │ 02 Oct 25 21:37 UTC │
	│ delete  │ -p running-upgrade-497263                                                                                                                │ running-upgrade-497263    │ jenkins │ v1.37.0 │ 02 Oct 25 21:37 UTC │ 02 Oct 25 21:37 UTC │
	│ start   │ -p pause-342805 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-342805              │ jenkins │ v1.37.0 │ 02 Oct 25 21:37 UTC │ 02 Oct 25 21:39 UTC │
	│ start   │ -p pause-342805 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-342805              │ jenkins │ v1.37.0 │ 02 Oct 25 21:39 UTC │ 02 Oct 25 21:39 UTC │
	│ pause   │ -p pause-342805 --alsologtostderr -v=5                                                                                                   │ pause-342805              │ jenkins │ v1.37.0 │ 02 Oct 25 21:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
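
Read bottom-up, the audit table gives the shortest reproduction candidate for this failure: a fresh crio-on-docker start, a second start against the already-running profile, then the pause itself. The three commands, copied from the table:

	out/minikube-linux-arm64 start -p pause-342805 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p pause-342805 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 pause -p pause-342805 --alsologtostderr -v=5
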
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:39:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:39:01.620590 1158612 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:39:01.622726 1158612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:01.622783 1158612 out.go:374] Setting ErrFile to fd 2...
	I1002 21:39:01.622804 1158612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:39:01.623263 1158612 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:39:01.623897 1158612 out.go:368] Setting JSON to false
	I1002 21:39:01.625062 1158612 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22879,"bootTime":1759418263,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:39:01.629831 1158612 start.go:140] virtualization:  
	I1002 21:39:01.634483 1158612 out.go:179] * [pause-342805] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:39:01.638467 1158612 notify.go:221] Checking for updates...
	I1002 21:39:01.639874 1158612 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:39:01.643032 1158612 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:39:01.646696 1158612 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:39:01.649949 1158612 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:39:01.652987 1158612 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:39:01.656399 1158612 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:39:01.659984 1158612 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:01.660890 1158612 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:39:01.689159 1158612 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:39:01.689295 1158612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:01.804453 1158612 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:39:01.794012015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:01.804566 1158612 docker.go:319] overlay module found
	I1002 21:39:01.807769 1158612 out.go:179] * Using the docker driver based on existing profile
	I1002 21:39:01.810645 1158612 start.go:306] selected driver: docker
	I1002 21:39:01.810667 1158612 start.go:936] validating driver "docker" against &{Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:01.810799 1158612 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:39:01.810898 1158612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:39:01.870092 1158612 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:39:01.861252268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:39:01.870508 1158612 cni.go:84] Creating CNI manager for ""
	I1002 21:39:01.870574 1158612 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:39:01.870622 1158612 start.go:350] cluster config:
	{Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:01.875493 1158612 out.go:179] * Starting "pause-342805" primary control-plane node in "pause-342805" cluster
	I1002 21:39:01.878163 1158612 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:39:01.881023 1158612 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:39:01.883673 1158612 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:01.883732 1158612 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:39:01.883748 1158612 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:39:01.883758 1158612 cache.go:59] Caching tarball of preloaded images
	I1002 21:39:01.883838 1158612 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:39:01.883890 1158612 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:39:01.884025 1158612 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/config.json ...
	I1002 21:39:01.906379 1158612 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:39:01.906404 1158612 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:39:01.906416 1158612 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:39:01.906437 1158612 start.go:361] acquireMachinesLock for pause-342805: {Name:mk9a324cf34d97a2bca3e9378b685e5bb3f5cda9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:39:01.906493 1158612 start.go:365] duration metric: took 34.838µs to acquireMachinesLock for "pause-342805"
	I1002 21:39:01.906523 1158612 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:39:01.906532 1158612 fix.go:55] fixHost starting: 
	I1002 21:39:01.906775 1158612 cli_runner.go:164] Run: docker container inspect pause-342805 --format={{.State.Status}}
	I1002 21:39:01.924170 1158612 fix.go:113] recreateIfNeeded on pause-342805: state=Running err=<nil>
	W1002 21:39:01.924200 1158612 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:39:01.231891 1144154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 21:39:01.231945 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:39:01.232022 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:39:01.267606 1144154 cri.go:89] found id: "ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:01.267631 1144154 cri.go:89] found id: "6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	I1002 21:39:01.267636 1144154 cri.go:89] found id: ""
	I1002 21:39:01.267644 1144154 logs.go:282] 2 containers: [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2]
	I1002 21:39:01.267708 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.272762 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.276865 1144154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:39:01.276948 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:39:01.310485 1144154 cri.go:89] found id: ""
	I1002 21:39:01.310514 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.310524 1144154 logs.go:284] No container was found matching "etcd"
	I1002 21:39:01.310530 1144154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:39:01.310594 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:39:01.350146 1144154 cri.go:89] found id: ""
	I1002 21:39:01.350171 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.350182 1144154 logs.go:284] No container was found matching "coredns"
	I1002 21:39:01.350188 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:39:01.350257 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:39:01.384024 1144154 cri.go:89] found id: "5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:01.384069 1144154 cri.go:89] found id: ""
	I1002 21:39:01.384093 1144154 logs.go:282] 1 containers: [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe]
	I1002 21:39:01.384149 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.389157 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:39:01.389319 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:39:01.422635 1144154 cri.go:89] found id: ""
	I1002 21:39:01.422663 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.422679 1144154 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:39:01.422689 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:39:01.422798 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:39:01.478749 1144154 cri.go:89] found id: "3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:01.478769 1144154 cri.go:89] found id: ""
	I1002 21:39:01.478777 1144154 logs.go:282] 1 containers: [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35]
	I1002 21:39:01.478894 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:01.483034 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:39:01.483115 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:39:01.518718 1144154 cri.go:89] found id: ""
	I1002 21:39:01.518740 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.518749 1144154 logs.go:284] No container was found matching "kindnet"
	I1002 21:39:01.518756 1144154 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 21:39:01.518813 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 21:39:01.571185 1144154 cri.go:89] found id: ""
	I1002 21:39:01.571209 1144154 logs.go:282] 0 containers: []
	W1002 21:39:01.571217 1144154 logs.go:284] No container was found matching "storage-provisioner"
	I1002 21:39:01.571230 1144154 logs.go:123] Gathering logs for kube-controller-manager [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35] ...
	I1002 21:39:01.571242 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:01.612118 1144154 logs.go:123] Gathering logs for container status ...
	I1002 21:39:01.612149 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:39:01.667849 1144154 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:39:01.667874 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
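
The 1144154 entries interleaved above come from a second minikube process running in parallel (its apiserver probe targets 192.168.76.2 rather than this profile's 192.168.85.2); they show minikube's log-gathering path: enumerate a component's containers by name, then tail each one. The same two steps by hand from inside the node (a sketch; the placeholder stands for an ID from the "found id:" lines):

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>
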
	I1002 21:39:01.927443 1158612 out.go:252] * Updating the running docker "pause-342805" container ...
	I1002 21:39:01.927478 1158612 machine.go:93] provisionDockerMachine start ...
	I1002 21:39:01.927573 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:01.944408 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:01.944732 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:01.944752 1158612 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:39:02.077819 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-342805
	
	I1002 21:39:02.077852 1158612 ubuntu.go:182] provisioning hostname "pause-342805"
	I1002 21:39:02.077918 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.096997 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:02.097320 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:02.097336 1158612 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-342805 && echo "pause-342805" | sudo tee /etc/hostname
	I1002 21:39:02.244143 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-342805
	
	I1002 21:39:02.244223 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.264010 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:02.264386 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:02.264408 1158612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-342805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-342805/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-342805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:39:02.406532 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:39:02.406564 1158612 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:39:02.406586 1158612 ubuntu.go:190] setting up certificates
	I1002 21:39:02.406595 1158612 provision.go:84] configureAuth start
	I1002 21:39:02.406654 1158612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-342805
	I1002 21:39:02.424230 1158612 provision.go:143] copyHostCerts
	I1002 21:39:02.424301 1158612 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:39:02.424316 1158612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:39:02.424390 1158612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:39:02.424494 1158612 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:39:02.424499 1158612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:39:02.424526 1158612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:39:02.424585 1158612 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:39:02.424589 1158612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:39:02.424612 1158612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:39:02.424668 1158612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.pause-342805 san=[127.0.0.1 192.168.85.2 localhost minikube pause-342805]
	I1002 21:39:02.787585 1158612 provision.go:177] copyRemoteCerts
	I1002 21:39:02.787653 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:39:02.787706 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.807816 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:02.905578 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:39:02.923251 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 21:39:02.942150 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:39:02.958817 1158612 provision.go:87] duration metric: took 552.209025ms to configureAuth
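
configureAuth regenerated the machine's server certificate with the SANs listed in the provision.go line above (127.0.0.1, 192.168.85.2, localhost, minikube, pause-342805). A sketch for double-checking them, using the ServerCertPath from the auth options:

	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
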
	I1002 21:39:02.958842 1158612 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:39:02.959069 1158612 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:02.959173 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:02.981897 1158612 main.go:141] libmachine: Using SSH client type: native
	I1002 21:39:02.982311 1158612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34156 <nil> <nil>}
	I1002 21:39:02.982335 1158612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:39:08.300579 1158612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:39:08.300616 1158612 machine.go:96] duration metric: took 6.373123033s to provisionDockerMachine
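
Most of the 6.37s in provisionDockerMachine is the systemctl restart crio chained onto the sysconfig write above (21:39:02 to 21:39:08). That file feeds extra flags into CRI-O, here --insecure-registry 10.96.0.0/12. A sketch for confirming the wiring (it assumes, as in the kicbase image, that crio.service pulls the file in through an EnvironmentFile directive):

	out/minikube-linux-arm64 -p pause-342805 ssh -- \
	    "cat /etc/sysconfig/crio.minikube && systemctl cat crio.service"
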
	I1002 21:39:08.300628 1158612 start.go:294] postStartSetup for "pause-342805" (driver="docker")
	I1002 21:39:08.300639 1158612 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:39:08.300746 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:39:08.300798 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.320458 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.418817 1158612 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:39:08.422477 1158612 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:39:08.422507 1158612 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:39:08.422519 1158612 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:39:08.422592 1158612 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:39:08.422728 1158612 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:39:08.422854 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:39:08.430404 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:39:08.448141 1158612 start.go:297] duration metric: took 147.496603ms for postStartSetup
	I1002 21:39:08.448223 1158612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:39:08.448266 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.465698 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.559660 1158612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:39:08.565213 1158612 fix.go:57] duration metric: took 6.658670812s for fixHost
	I1002 21:39:08.565236 1158612 start.go:84] releasing machines lock for "pause-342805", held for 6.658730478s
	I1002 21:39:08.565324 1158612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-342805
	I1002 21:39:08.582893 1158612 ssh_runner.go:195] Run: cat /version.json
	I1002 21:39:08.582944 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.582956 1158612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:39:08.583016 1158612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342805
	I1002 21:39:08.607279 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.610017 1158612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34156 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/pause-342805/id_rsa Username:docker}
	I1002 21:39:08.701733 1158612 ssh_runner.go:195] Run: systemctl --version
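The curl -sS -m 2 https://registry.k8s.io/ check above is a plain reachability probe with a two-second budget. An equivalent sketch in Go (an assumption for illustration, not what minikube runs internally):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Two-second overall budget, matching curl's -m 2.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry reachable, status:", resp.Status)
}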
	I1002 21:39:08.793935 1158612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:39:08.834447 1158612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:39:08.840046 1158612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:39:08.840129 1158612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:39:08.848323 1158612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
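The find/-exec mv step above disables any bridge or podman CNI configs by appending .mk_disabled, so they stop matching and kindnet can take over networking. A rough Go equivalent of that rename pass (a sketch only; the directory needs root, and running this against a live host is destructive, so experiment on a copy):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	netd := "/etc/cni/net.d" // use a scratch copy when experimenting
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(netd, pat))
		if err != nil {
			fmt.Println("bad pattern:", err)
			continue
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled, matching -not -name *.mk_disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled:", m)
		}
	}
}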
	I1002 21:39:08.848348 1158612 start.go:496] detecting cgroup driver to use...
	I1002 21:39:08.848410 1158612 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:39:08.848473 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:39:08.863485 1158612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:39:08.876582 1158612 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:39:08.876673 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:39:08.892467 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:39:08.906271 1158612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:39:09.043784 1158612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:39:09.167348 1158612 docker.go:234] disabling docker service ...
	I1002 21:39:09.167456 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:39:09.183050 1158612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:39:09.196898 1158612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:39:09.338825 1158612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:39:09.475783 1158612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:39:09.489689 1158612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:39:09.504098 1158612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:39:09.504251 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.513501 1158612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:39:09.513624 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.523834 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.533734 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.543829 1158612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:39:09.552400 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.561393 1158612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.569689 1158612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:39:09.578387 1158612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:39:09.585952 1158612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:39:09.593318 1158612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:09.722456 1158612 ssh_runner.go:195] Run: sudo systemctl restart crio
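The sed commands above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A minimal Go sketch of the same line-oriented substitution (path and values copied from the log; error handling trimmed, and writing the real file needs root):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}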
	I1002 21:39:09.891619 1158612 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:39:09.891690 1158612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:39:09.895724 1158612 start.go:564] Will wait 60s for crictl version
	I1002 21:39:09.895828 1158612 ssh_runner.go:195] Run: which crictl
	I1002 21:39:09.899234 1158612 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:39:09.927571 1158612 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:39:09.927702 1158612 ssh_runner.go:195] Run: crio --version
	I1002 21:39:09.956794 1158612 ssh_runner.go:195] Run: crio --version
	I1002 21:39:09.990430 1158612 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:39:09.993406 1158612 cli_runner.go:164] Run: docker network inspect pause-342805 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:39:10.015273 1158612 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:39:10.020073 1158612 kubeadm.go:883] updating cluster {Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:39:10.020246 1158612 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:39:10.020309 1158612 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:39:10.055993 1158612 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:39:10.056022 1158612 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:39:10.056084 1158612 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:39:10.083881 1158612 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:39:10.083911 1158612 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:39:10.083920 1158612 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:39:10.084045 1158612 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-342805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:39:10.084140 1158612 ssh_runner.go:195] Run: crio config
	I1002 21:39:10.151260 1158612 cni.go:84] Creating CNI manager for ""
	I1002 21:39:10.151285 1158612 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:39:10.151302 1158612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:39:10.151356 1158612 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-342805 NodeName:pause-342805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:39:10.151547 1158612 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-342805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:39:10.151640 1158612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:39:10.159884 1158612 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:39:10.159995 1158612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:39:10.167939 1158612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 21:39:10.181366 1158612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:39:10.194948 1158612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
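The 2209-byte kubeadm.yaml.new written above is the rendered form of the config printed earlier. One plausible way such a document gets produced is by executing a Go text/template against the kubeadm options struct; the template and struct below are illustrative toys, not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative subset of the values seen in the log above.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{"192.168.85.2", 8443, "10.244.0.0/16", "v1.34.1"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}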
	I1002 21:39:10.208224 1158612 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:39:10.212464 1158612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:10.340506 1158612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:39:10.354479 1158612 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805 for IP: 192.168.85.2
	I1002 21:39:10.354498 1158612 certs.go:195] generating shared ca certs ...
	I1002 21:39:10.354513 1158612 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:10.354700 1158612 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:39:10.354767 1158612 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:39:10.354782 1158612 certs.go:257] generating profile certs ...
	I1002 21:39:10.354889 1158612 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.key
	I1002 21:39:10.354957 1158612 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/apiserver.key.7baa9c76
	I1002 21:39:10.355020 1158612 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/proxy-client.key
	I1002 21:39:10.355165 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:39:10.355218 1158612 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:39:10.355234 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:39:10.355259 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:39:10.355311 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:39:10.355345 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:39:10.355416 1158612 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:39:10.356015 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:39:10.375784 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:39:10.393508 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:39:10.411551 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:39:10.428609 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:39:10.446006 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:39:10.463422 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:39:10.480577 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:39:10.497745 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:39:10.516902 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:39:10.533910 1158612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:39:10.550797 1158612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:39:10.563142 1158612 ssh_runner.go:195] Run: openssl version
	I1002 21:39:10.569066 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:39:10.577156 1158612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:39:10.580982 1158612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:39:10.581099 1158612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:39:10.621963 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:39:10.630168 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:39:10.638585 1158612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:39:10.642404 1158612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:39:10.642480 1158612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:39:10.685104 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:39:10.693264 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:39:10.701672 1158612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:10.705485 1158612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:10.705559 1158612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:39:10.748667 1158612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
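The link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: openssl x509 -hash -noout prints the hash, and a <hash>.0 symlink lets OpenSSL's default verify path locate each CA. A sketch of the same flow (shells out to openssl; the PEM path is a placeholder, and creating links under /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any existing link, then recreate it.
	os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println("linked", link, "->", pem)
}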
	I1002 21:39:10.761305 1158612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:39:10.765452 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:39:10.806831 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:39:10.847562 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:39:10.888664 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:39:10.929745 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:39:10.971075 1158612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
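Each openssl x509 ... -checkend 86400 run above asks whether a certificate expires within the next 24 hours. The same check in pure Go with crypto/x509 (a sketch; the path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// -checkend 86400: fail if NotAfter falls within the next 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}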
	I1002 21:39:11.012518 1158612 kubeadm.go:400] StartCluster: {Name:pause-342805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-342805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:39:11.012639 1158612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:39:11.012707 1158612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:39:11.042927 1158612 cri.go:89] found id: "aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e"
	I1002 21:39:11.042949 1158612 cri.go:89] found id: "3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58"
	I1002 21:39:11.042955 1158612 cri.go:89] found id: "a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf"
	I1002 21:39:11.042958 1158612 cri.go:89] found id: "a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373"
	I1002 21:39:11.042961 1158612 cri.go:89] found id: "896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695"
	I1002 21:39:11.042964 1158612 cri.go:89] found id: "eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99"
	I1002 21:39:11.042967 1158612 cri.go:89] found id: "8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0"
	I1002 21:39:11.042992 1158612 cri.go:89] found id: ""
	I1002 21:39:11.043052 1158612 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:39:11.054141 1158612 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:39:11Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:39:11.054234 1158612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:39:11.063006 1158612 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:39:11.063027 1158612 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:39:11.063095 1158612 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:39:11.077841 1158612 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:39:11.078563 1158612 kubeconfig.go:125] found "pause-342805" server: "https://192.168.85.2:8443"
	I1002 21:39:11.079456 1158612 kapi.go:59] client config for pause-342805: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:39:11.079965 1158612 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:39:11.079985 1158612 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:39:11.079991 1158612 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:39:11.079996 1158612 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:39:11.080002 1158612 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:39:11.080264 1158612 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:39:11.088772 1158612 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 21:39:11.088811 1158612 kubeadm.go:601] duration metric: took 25.774852ms to restartPrimaryControlPlane
	I1002 21:39:11.088852 1158612 kubeadm.go:402] duration metric: took 76.312277ms to StartCluster
	I1002 21:39:11.088879 1158612 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:11.088968 1158612 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:39:11.089930 1158612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:39:11.090242 1158612 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:39:11.090566 1158612 config.go:182] Loaded profile config "pause-342805": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:39:11.090623 1158612 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:39:11.095086 1158612 out.go:179] * Verifying Kubernetes components...
	I1002 21:39:11.095083 1158612 out.go:179] * Enabled addons: 
	I1002 21:39:11.097923 1158612 addons.go:514] duration metric: took 7.288104ms for enable addons: enabled=[]
	I1002 21:39:11.098019 1158612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:39:11.238139 1158612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:39:11.252553 1158612 node_ready.go:35] waiting up to 6m0s for node "pause-342805" to be "Ready" ...
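The wait that starts here polls the node object until its Ready condition reports True. A condensed client-go sketch of such a poll (kubeconfig path is a placeholder; assumes k8s.io/client-go and its API types are available):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-342805", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}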
	I1002 21:39:11.785798 1144154 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.117899668s)
	W1002 21:39:11.785837 1144154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 21:39:11.785845 1144154 logs.go:123] Gathering logs for kube-apiserver [6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2] ...
	I1002 21:39:11.785855 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	I1002 21:39:11.849845 1144154 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:39:11.849920 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:39:11.949533 1144154 logs.go:123] Gathering logs for kubelet ...
	I1002 21:39:11.949628 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:39:12.094450 1144154 logs.go:123] Gathering logs for dmesg ...
	I1002 21:39:12.094543 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:39:12.116808 1144154 logs.go:123] Gathering logs for kube-apiserver [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46] ...
	I1002 21:39:12.116833 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:12.177911 1144154 logs.go:123] Gathering logs for kube-scheduler [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe] ...
	I1002 21:39:12.177991 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:14.768347 1144154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:39:16.771216 1144154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:55690->192.168.76.2:8443: read: connection reset by peer
	I1002 21:39:16.771262 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:39:16.771317 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:39:16.820783 1144154 cri.go:89] found id: "ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:16.820801 1144154 cri.go:89] found id: "6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	I1002 21:39:16.820805 1144154 cri.go:89] found id: ""
	I1002 21:39:16.820813 1144154 logs.go:282] 2 containers: [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2]
	I1002 21:39:16.820870 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:16.824554 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:16.828164 1144154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:39:16.828225 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:39:16.868549 1144154 cri.go:89] found id: ""
	I1002 21:39:16.868567 1144154 logs.go:282] 0 containers: []
	W1002 21:39:16.868574 1144154 logs.go:284] No container was found matching "etcd"
	I1002 21:39:16.868580 1144154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:39:16.868720 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:39:16.908615 1144154 cri.go:89] found id: ""
	I1002 21:39:16.908635 1144154 logs.go:282] 0 containers: []
	W1002 21:39:16.908643 1144154 logs.go:284] No container was found matching "coredns"
	I1002 21:39:16.908659 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:39:16.908747 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:39:16.944423 1144154 cri.go:89] found id: "5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:16.944446 1144154 cri.go:89] found id: ""
	I1002 21:39:16.944455 1144154 logs.go:282] 1 containers: [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe]
	I1002 21:39:16.944519 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:16.953255 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:39:16.953337 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:39:17.010579 1144154 cri.go:89] found id: ""
	I1002 21:39:17.010601 1144154 logs.go:282] 0 containers: []
	W1002 21:39:17.010609 1144154 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:39:17.010615 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:39:17.010680 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:39:17.048370 1144154 cri.go:89] found id: "3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:17.048450 1144154 cri.go:89] found id: ""
	I1002 21:39:17.048473 1144154 logs.go:282] 1 containers: [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35]
	I1002 21:39:17.048578 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:17.053561 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:39:17.053629 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:39:17.099327 1144154 cri.go:89] found id: ""
	I1002 21:39:17.099402 1144154 logs.go:282] 0 containers: []
	W1002 21:39:17.099437 1144154 logs.go:284] No container was found matching "kindnet"
	I1002 21:39:17.099463 1144154 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 21:39:17.099553 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 21:39:17.135380 1144154 cri.go:89] found id: ""
	I1002 21:39:17.135401 1144154 logs.go:282] 0 containers: []
	W1002 21:39:17.135409 1144154 logs.go:284] No container was found matching "storage-provisioner"
	I1002 21:39:17.135421 1144154 logs.go:123] Gathering logs for kube-scheduler [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe] ...
	I1002 21:39:17.135433 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:17.201552 1144154 logs.go:123] Gathering logs for kube-controller-manager [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35] ...
	I1002 21:39:17.201651 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:17.237788 1144154 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:39:17.237865 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:39:17.316034 1144154 logs.go:123] Gathering logs for container status ...
	I1002 21:39:17.316067 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:39:17.363209 1144154 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:39:17.363235 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:39:17.473408 1144154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:39:17.473424 1144154 logs.go:123] Gathering logs for kube-apiserver [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46] ...
	I1002 21:39:17.473438 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:17.528901 1144154 logs.go:123] Gathering logs for kube-apiserver [6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2] ...
	I1002 21:39:17.528979 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	W1002 21:39:17.578863 1144154 logs.go:130] failed kube-apiserver [6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:39:17.575776    4012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist" containerID="6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	time="2025-10-02T21:39:17Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1002 21:39:17.575776    4012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist" containerID="6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2"
	time="2025-10-02T21:39:17Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2\": container with ID starting with 6b95d4fa61e21bbfbde3964f8cfb0177132f6598b40d112a4a096412f2f251f2 not found: ID does not exist"
	
	** /stderr **
	I1002 21:39:17.578884 1144154 logs.go:123] Gathering logs for kubelet ...
	I1002 21:39:17.578896 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:39:17.714220 1144154 logs.go:123] Gathering logs for dmesg ...
	I1002 21:39:17.714330 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:39:16.745592 1158612 node_ready.go:49] node "pause-342805" is "Ready"
	I1002 21:39:16.745616 1158612 node_ready.go:38] duration metric: took 5.493031658s for node "pause-342805" to be "Ready" ...
	I1002 21:39:16.745629 1158612 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:39:16.745689 1158612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:39:16.769979 1158612 api_server.go:72] duration metric: took 5.679702609s to wait for apiserver process to appear ...
	I1002 21:39:16.770001 1158612 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:39:16.770019 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:16.868020 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1002 21:39:16.868093 1158612 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1002 21:39:17.270647 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:17.286735 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:39:17.286822 1158612 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:39:17.770330 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:17.779856 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:39:17.779950 1158612 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:39:18.270122 1158612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:39:18.279000 1158612 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
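The 403 -> 500 -> 200 progression above is the apiserver finishing startup: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, then individual poststarthooks still report failed, and finally everything returns ok. A bare-bones probe in the same spirit (a sketch; it skips TLS verification only to stay short, where a real client would load /var/lib/minikube/certs/ca.crt as its root CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the sketch short; a real probe would put the
	// cluster CA into a tls.Config RootCAs pool instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 with body "ok" means healthy; a 403 or 500 body says which check failed.
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
}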
	I1002 21:39:18.280319 1158612 api_server.go:141] control plane version: v1.34.1
	I1002 21:39:18.280389 1158612 api_server.go:131] duration metric: took 1.510379757s to wait for apiserver health ...
	I1002 21:39:18.280414 1158612 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:39:18.284879 1158612 system_pods.go:59] 7 kube-system pods found
	I1002 21:39:18.284971 1158612 system_pods.go:61] "coredns-66bc5c9577-wklz5" [703d8048-2e98-4139-bb02-6bf0333a3a18] Running
	I1002 21:39:18.284993 1158612 system_pods.go:61] "etcd-pause-342805" [21b56e85-81aa-470d-8401-ec24205fe60f] Running
	I1002 21:39:18.285028 1158612 system_pods.go:61] "kindnet-9p45q" [57fa9c10-a34c-4e2c-8201-e61aedf6b127] Running
	I1002 21:39:18.285053 1158612 system_pods.go:61] "kube-apiserver-pause-342805" [5c62ca30-d895-4bdd-a2eb-337f5cbeacac] Running
	I1002 21:39:18.285081 1158612 system_pods.go:61] "kube-controller-manager-pause-342805" [f56c68bd-d65f-4837-88a6-327d4b5e47ee] Running
	I1002 21:39:18.285102 1158612 system_pods.go:61] "kube-proxy-b8p7f" [198ef9a9-14fd-48d1-a6a2-e318eaa0436e] Running
	I1002 21:39:18.285142 1158612 system_pods.go:61] "kube-scheduler-pause-342805" [6432c0a0-8ce6-4963-a75a-004c0f10732c] Running
	I1002 21:39:18.285170 1158612 system_pods.go:74] duration metric: took 4.735132ms to wait for pod list to return data ...
	I1002 21:39:18.285195 1158612 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:39:18.288619 1158612 default_sa.go:45] found service account: "default"
	I1002 21:39:18.288704 1158612 default_sa.go:55] duration metric: took 3.487979ms for default service account to be created ...
	I1002 21:39:18.288749 1158612 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:39:18.298418 1158612 system_pods.go:86] 7 kube-system pods found
	I1002 21:39:18.298447 1158612 system_pods.go:89] "coredns-66bc5c9577-wklz5" [703d8048-2e98-4139-bb02-6bf0333a3a18] Running
	I1002 21:39:18.298454 1158612 system_pods.go:89] "etcd-pause-342805" [21b56e85-81aa-470d-8401-ec24205fe60f] Running
	I1002 21:39:18.298459 1158612 system_pods.go:89] "kindnet-9p45q" [57fa9c10-a34c-4e2c-8201-e61aedf6b127] Running
	I1002 21:39:18.298463 1158612 system_pods.go:89] "kube-apiserver-pause-342805" [5c62ca30-d895-4bdd-a2eb-337f5cbeacac] Running
	I1002 21:39:18.298468 1158612 system_pods.go:89] "kube-controller-manager-pause-342805" [f56c68bd-d65f-4837-88a6-327d4b5e47ee] Running
	I1002 21:39:18.298471 1158612 system_pods.go:89] "kube-proxy-b8p7f" [198ef9a9-14fd-48d1-a6a2-e318eaa0436e] Running
	I1002 21:39:18.298475 1158612 system_pods.go:89] "kube-scheduler-pause-342805" [6432c0a0-8ce6-4963-a75a-004c0f10732c] Running
	I1002 21:39:18.298482 1158612 system_pods.go:126] duration metric: took 9.698722ms to wait for k8s-apps to be running ...
	I1002 21:39:18.298489 1158612 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:39:18.298547 1158612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:39:18.316945 1158612 system_svc.go:56] duration metric: took 18.431012ms WaitForService to wait for kubelet
	I1002 21:39:18.317022 1158612 kubeadm.go:586] duration metric: took 7.22674822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:39:18.317083 1158612 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:39:18.321767 1158612 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:39:18.321847 1158612 node_conditions.go:123] node cpu capacity is 2
	I1002 21:39:18.321876 1158612 node_conditions.go:105] duration metric: took 4.77253ms to run NodePressure ...
	I1002 21:39:18.321902 1158612 start.go:242] waiting for startup goroutines ...
	I1002 21:39:18.321938 1158612 start.go:247] waiting for cluster config update ...
	I1002 21:39:18.321964 1158612 start.go:256] writing updated cluster config ...
	I1002 21:39:18.322409 1158612 ssh_runner.go:195] Run: rm -f paused
	I1002 21:39:18.326420 1158612 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:39:18.327061 1158612 kapi.go:59] client config for pause-342805: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/profiles/pause-342805/client.key", CAFile:"/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120200), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:39:18.335813 1158612 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wklz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.347037 1158612 pod_ready.go:94] pod "coredns-66bc5c9577-wklz5" is "Ready"
	I1002 21:39:18.347064 1158612 pod_ready.go:86] duration metric: took 11.221617ms for pod "coredns-66bc5c9577-wklz5" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.349567 1158612 pod_ready.go:83] waiting for pod "etcd-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.354157 1158612 pod_ready.go:94] pod "etcd-pause-342805" is "Ready"
	I1002 21:39:18.354182 1158612 pod_ready.go:86] duration metric: took 4.592391ms for pod "etcd-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.356492 1158612 pod_ready.go:83] waiting for pod "kube-apiserver-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.360865 1158612 pod_ready.go:94] pod "kube-apiserver-pause-342805" is "Ready"
	I1002 21:39:18.360889 1158612 pod_ready.go:86] duration metric: took 4.373427ms for pod "kube-apiserver-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.363146 1158612 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.730777 1158612 pod_ready.go:94] pod "kube-controller-manager-pause-342805" is "Ready"
	I1002 21:39:18.730858 1158612 pod_ready.go:86] duration metric: took 367.68662ms for pod "kube-controller-manager-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:18.933183 1158612 pod_ready.go:83] waiting for pod "kube-proxy-b8p7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.330775 1158612 pod_ready.go:94] pod "kube-proxy-b8p7f" is "Ready"
	I1002 21:39:19.330841 1158612 pod_ready.go:86] duration metric: took 397.58717ms for pod "kube-proxy-b8p7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.530612 1158612 pod_ready.go:83] waiting for pod "kube-scheduler-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.934225 1158612 pod_ready.go:94] pod "kube-scheduler-pause-342805" is "Ready"
	I1002 21:39:19.934254 1158612 pod_ready.go:86] duration metric: took 403.617102ms for pod "kube-scheduler-pause-342805" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:39:19.934266 1158612 pod_ready.go:40] duration metric: took 1.607811964s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
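The pod_ready.go loop above treats a pod as "Ready" once its PodReady condition reports True. A minimal client-go sketch of that test (assumptions: default kubeconfig, the same kube-system label selectors the log lists; this is not minikube's actual implementation):

```go
// Sketch of the Ready-condition check behind the pod_ready.go lines.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Same label selectors the log lists for the control-plane components.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd",
		"component=kube-apiserver", "component=kube-controller-manager",
		"k8s-app=kube-proxy", "component=kube-scheduler"} {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, isPodReady(&p))
		}
	}
}
```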
	I1002 21:39:20.015029 1158612 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:39:20.020711 1158612 out.go:179] * Done! kubectl is now configured to use "pause-342805" cluster and "default" namespace by default
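Both this completed run and the restarting cluster below probe the same endpoint: GET https://<apiserver>:8443/healthz, reporting "stopped" while the connection is refused and accepting a 200 "ok". A minimal sketch of such a poll (an assumption: TLS verification is skipped here for brevity, whereas the real client presents the cluster CA and client certs shown in the kapi.go config above):

```go
// Sketch of an apiserver healthz poll: retry until 200 or timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The real client authenticates with cluster certs; this sketch
	// skips verification purely to stay self-contained.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the log's "returned 200: ok"
			}
		}
		// err != nil covers the "connect: connection refused" case.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```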
	I1002 21:39:20.242129 1144154 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:39:20.242490 1144154 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 21:39:20.242533 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:39:20.242585 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:39:20.336106 1144154 cri.go:89] found id: "ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:20.336126 1144154 cri.go:89] found id: ""
	I1002 21:39:20.336135 1144154 logs.go:282] 1 containers: [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46]
	I1002 21:39:20.336192 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:20.346609 1144154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:39:20.346681 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:39:20.441217 1144154 cri.go:89] found id: ""
	I1002 21:39:20.441239 1144154 logs.go:282] 0 containers: []
	W1002 21:39:20.441247 1144154 logs.go:284] No container was found matching "etcd"
	I1002 21:39:20.441254 1144154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:39:20.441316 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:39:20.531746 1144154 cri.go:89] found id: ""
	I1002 21:39:20.531771 1144154 logs.go:282] 0 containers: []
	W1002 21:39:20.531780 1144154 logs.go:284] No container was found matching "coredns"
	I1002 21:39:20.531786 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:39:20.531845 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:39:20.612344 1144154 cri.go:89] found id: "5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:20.612369 1144154 cri.go:89] found id: ""
	I1002 21:39:20.612378 1144154 logs.go:282] 1 containers: [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe]
	I1002 21:39:20.612435 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:20.626704 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:39:20.626790 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:39:20.709468 1144154 cri.go:89] found id: ""
	I1002 21:39:20.709490 1144154 logs.go:282] 0 containers: []
	W1002 21:39:20.709498 1144154 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:39:20.709505 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:39:20.709570 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:39:20.798509 1144154 cri.go:89] found id: "14fcb69953580ae9ed51f4de6e567531cb1f1512b688be0911acde3a2e9e901c"
	I1002 21:39:20.798529 1144154 cri.go:89] found id: "3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:20.798534 1144154 cri.go:89] found id: ""
	I1002 21:39:20.798542 1144154 logs.go:282] 2 containers: [14fcb69953580ae9ed51f4de6e567531cb1f1512b688be0911acde3a2e9e901c 3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35]
	I1002 21:39:20.798598 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:20.810523 1144154 ssh_runner.go:195] Run: which crictl
	I1002 21:39:20.818774 1144154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:39:20.818846 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:39:20.871185 1144154 cri.go:89] found id: ""
	I1002 21:39:20.871208 1144154 logs.go:282] 0 containers: []
	W1002 21:39:20.871216 1144154 logs.go:284] No container was found matching "kindnet"
	I1002 21:39:20.871223 1144154 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 21:39:20.871279 1144154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 21:39:20.934319 1144154 cri.go:89] found id: ""
	I1002 21:39:20.934342 1144154 logs.go:282] 0 containers: []
	W1002 21:39:20.934350 1144154 logs.go:284] No container was found matching "storage-provisioner"
	I1002 21:39:20.934366 1144154 logs.go:123] Gathering logs for kube-controller-manager [14fcb69953580ae9ed51f4de6e567531cb1f1512b688be0911acde3a2e9e901c] ...
	I1002 21:39:20.934378 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 14fcb69953580ae9ed51f4de6e567531cb1f1512b688be0911acde3a2e9e901c"
	I1002 21:39:20.987139 1144154 logs.go:123] Gathering logs for kube-controller-manager [3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35] ...
	I1002 21:39:20.987171 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3579b279b5f03f7f038a83645808a30d3b562d5312bac958cc773ee590ff9b35"
	I1002 21:39:21.040878 1144154 logs.go:123] Gathering logs for container status ...
	I1002 21:39:21.040905 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:39:21.076991 1144154 logs.go:123] Gathering logs for kubelet ...
	I1002 21:39:21.077020 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:39:21.197808 1144154 logs.go:123] Gathering logs for dmesg ...
	I1002 21:39:21.197846 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:39:21.215251 1144154 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:39:21.215279 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:39:21.299858 1144154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:39:21.299883 1144154 logs.go:123] Gathering logs for kube-apiserver [ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46] ...
	I1002 21:39:21.299896 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca872b52662561b9fca544e4f0f8eaf805c378e319e09fabcf9c76bece9b9b46"
	I1002 21:39:21.342194 1144154 logs.go:123] Gathering logs for kube-scheduler [5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe] ...
	I1002 21:39:21.342226 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5776337fdfa3026e47281cc2288c7826a5b8856c8e999fb0ec5d4138f755e4fe"
	I1002 21:39:21.446248 1144154 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:39:21.446286 1144154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
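The cri.go/logs.go sequence above repeats one pattern per component: run `sudo crictl ps -a --quiet --name=<component>`, collect the container IDs it prints one per line, and warn "No container was found" when the output is empty. A hypothetical standalone sketch of that lookup (assumes crictl and sudo are available on the host):

```go
// Sketch of the per-component container lookup seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the IDs crictl prints, one per line.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Empty result corresponds to the log's "No container was found".
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```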
	
	
	==> CRI-O <==
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.570785749Z" level=info msg="Started container" PID=2344 containerID=710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588 description=kube-system/kube-scheduler-pause-342805/kube-scheduler id=ea9d1743-4a10-4174-bb60-2c744cca61f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45035934e89a1607d3cd02f00958f38075d853c9c173a4af011c3dda048d92d9
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.589007791Z" level=info msg="Started container" PID=2342 containerID=a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70 description=kube-system/coredns-66bc5c9577-wklz5/coredns id=1229f707-e32b-46a7-afd8-7eb746dfcba4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8286aa13ff27d8de4ab63985d86645ec45ad8b82fe93de47ab432b58663349f8
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.591103586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.591623407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.608534034Z" level=info msg="Created container b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b: kube-system/kube-apiserver-pause-342805/kube-apiserver" id=16bf2bd4-2c44-4ec1-9fd4-3cfc64caf035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.61136064Z" level=info msg="Starting container: b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b" id=cc20f0eb-faab-491e-a82c-a29d7ff61011 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.613649177Z" level=info msg="Started container" PID=2345 containerID=b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b description=kube-system/kube-apiserver-pause-342805/kube-apiserver id=cc20f0eb-faab-491e-a82c-a29d7ff61011 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f932a5c0c7c3bb788c662bb3e1cd9e97fba18bc2a07494c9c2c6fe3189dbd600
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.632861837Z" level=info msg="Created container fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a: kube-system/kube-controller-manager-pause-342805/kube-controller-manager" id=1cd23463-bfb4-4808-9cfb-09f65edc0707 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.633707311Z" level=info msg="Starting container: fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a" id=758d31e5-8610-4c4c-9b92-721471569171 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.637542546Z" level=info msg="Started container" PID=2369 containerID=fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a description=kube-system/kube-controller-manager-pause-342805/kube-controller-manager id=758d31e5-8610-4c4c-9b92-721471569171 name=/runtime.v1.RuntimeService/StartContainer sandboxID=db66b4884e3c6070df946e4e4e27808c7218ee4bbbdd861da808646c0711ef9a
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.648655728Z" level=info msg="Created container 49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb: kube-system/etcd-pause-342805/etcd" id=4a5af565-4c5a-4393-a99c-729ebc63da5d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.649363343Z" level=info msg="Starting container: 49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb" id=2f83805b-82bc-47fc-b12c-a39091b4f5cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:39:11 pause-342805 crio[2054]: time="2025-10-02T21:39:11.651492351Z" level=info msg="Started container" PID=2373 containerID=49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb description=kube-system/etcd-pause-342805/etcd id=2f83805b-82bc-47fc-b12c-a39091b4f5cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7af2ca5fa18a3e4aa13ab24679a64931f680c4888010e31cf7f087c33ef5fe8
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.818726524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.823391209Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.82353436Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.823611782Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.827585975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.827616374Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.827632537Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.831435872Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.831471391Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.831495767Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.835567525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:39:21 pause-342805 crio[2054]: time="2025-10-02T21:39:21.835715492Z" level=info msg="Updated default CNI network name to kindnet"
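	The "CNI monitoring event" lines show CRI-O reacting to inotify events on /etc/cni/net.d as kindnet writes its conflist via a temp file and rename. A toy sketch of the same watch pattern using github.com/fsnotify/fsnotify (an assumption for illustration; CRI-O's actual watcher is more involved):

```go
// Toy sketch of watching a CNI config directory for changes.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// e.g. CREATE "/etc/cni/net.d/10-kindnet.conflist.temp",
			// then WRITE, then RENAME, as in the CRI-O log above.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```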
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	49b680a7feedf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago       Running             etcd                      1                   f7af2ca5fa18a       etcd-pause-342805                      kube-system
	fb276e882bc35       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago       Running             kube-controller-manager   1                   db66b4884e3c6       kube-controller-manager-pause-342805   kube-system
	b6f87782ca221       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago       Running             kube-apiserver            1                   f932a5c0c7c3b       kube-apiserver-pause-342805            kube-system
	710a4b876c5a6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago       Running             kube-scheduler            1                   45035934e89a1       kube-scheduler-pause-342805            kube-system
	a8a81ebb8ff2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   14 seconds ago       Running             coredns                   1                   8286aa13ff27d       coredns-66bc5c9577-wklz5               kube-system
	f51f346240e74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   14 seconds ago       Running             kube-proxy                1                   c22cc56af246c       kube-proxy-b8p7f                       kube-system
	2eebf60bbf550       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   14 seconds ago       Running             kindnet-cni               1                   f0951a53464f2       kindnet-9p45q                          kube-system
	aa126941d61c5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Exited              coredns                   0                   8286aa13ff27d       coredns-66bc5c9577-wklz5               kube-system
	3824cad09c213       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f0951a53464f2       kindnet-9p45q                          kube-system
	a0a1466e037f1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c22cc56af246c       kube-proxy-b8p7f                       kube-system
	a21ccda7e71dd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   f932a5c0c7c3b       kube-apiserver-pause-342805            kube-system
	896a2a1f7e815       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   45035934e89a1       kube-scheduler-pause-342805            kube-system
	eaaa85998fc71       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   f7af2ca5fa18a       etcd-pause-342805                      kube-system
	8c463a387ca63       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   db66b4884e3c6       kube-controller-manager-pause-342805   kube-system
	
	
	==> coredns [a8a81ebb8ff2ee4206f048c06e5ef7ee9a247f0e196f127cc9c4d11a54e3ec70] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60678 - 33845 "HINFO IN 8882800103220148662.2007934130076834967. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019975446s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [aa126941d61c5ed3a2539d94ca024fc2b211af29a29435b3b5fe3c14e86f425e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60620 - 48403 "HINFO IN 8765493537311838405.7210490724235879187. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036251201s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-342805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-342805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=pause-342805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_38_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:38:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-342805
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:39:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:38:58 +0000   Thu, 02 Oct 2025 21:38:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-342805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 de142cc9fb0445e5a73a6c6cca2db4b3
	  System UUID:                2dc81684-4caa-4340-a064-df5b7bf8ba40
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-wklz5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     69s
	  kube-system                 etcd-pause-342805                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         74s
	  kube-system                 kindnet-9p45q                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      69s
	  kube-system                 kube-apiserver-pause-342805             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-pause-342805    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-b8p7f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-pause-342805             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 67s   kube-proxy       
	  Normal   Starting                 7s    kube-proxy       
	  Normal   Starting                 74s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s   kubelet          Node pause-342805 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s   kubelet          Node pause-342805 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s   kubelet          Node pause-342805 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           70s   node-controller  Node pause-342805 event: Registered Node pause-342805 in Controller
	  Normal   NodeReady                28s   kubelet          Node pause-342805 status is now: NodeReady
	  Normal   RegisteredNode           6s    node-controller  Node pause-342805 event: Registered Node pause-342805 in Controller
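	The NodePressure verification earlier in the log reads the same node status this describe output renders: the three pressure conditions must be False (with Ready True), and capacity comes from .status.capacity. A hedged client-go sketch of that check (assumes a reachable cluster via the default kubeconfig; not minikube's node_conditions.go itself):

```go
// Sketch: flag any node pressure condition and print capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("%s: %s is True\n", n.Name, c.Type)
			}
		}
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
```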
	
	
	==> dmesg <==
	[Oct 2 21:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:08] overlayfs: idmapped layers are currently not supported
	[  +3.176407] overlayfs: idmapped layers are currently not supported
	[ +43.828152] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:09] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [49b680a7feedf7ae93e61038594b7f0cfc7491ece78d19de67bc3d8f1b4223bb] <==
	{"level":"warn","ts":"2025-10-02T21:39:15.215361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.241279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.256816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.275949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.298586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.321342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.328582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.366745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.370091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.426437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.460999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.498592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.517412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.553367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.579761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.630108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.653045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.684474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.704079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.730712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.762749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.795189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.811452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.838830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:39:15.922741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42136","server-name":"","error":"EOF"}
	
	
	==> etcd [eaaa85998fc718f91f20fe8359d69931b92d16f630120efe6ae6ac056fe5db99] <==
	{"level":"warn","ts":"2025-10-02T21:38:08.786231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.803015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.825796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.873787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.884379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.898902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:38:08.968794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48382","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:39:03.138533Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:39:03.138596Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-342805","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-02T21:39:03.138682Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:39:03.288850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:39:03.288937Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:39:03.288958Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-02T21:39:03.289002Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T21:39:03.289079Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289082Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289098Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:39:03.289104Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289139Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:39:03.289147Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:39:03.289154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:39:03.292266Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-02T21:39:03.292355Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:39:03.292395Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T21:39:03.292403Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-342805","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 21:39:26 up  6:21,  0 user,  load average: 2.57, 2.40, 1.97
	Linux pause-342805 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2eebf60bbf5500fe23ec8cd8315dfbc1950838a47c8aece540057eda0ca29225] <==
	I1002 21:39:11.637743       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:39:11.638802       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:39:11.638938       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:39:11.638949       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:39:11.638964       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:39:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:39:11.819617       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:39:11.819648       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:39:11.819659       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:39:11.821452       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:39:16.680882       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:39:16.680992       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:39:16.681143       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:39:16.681182       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:39:18.120280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:39:18.120316       1 metrics.go:72] Registering metrics
	I1002 21:39:18.120425       1 controller.go:711] "Syncing nftables rules"
	I1002 21:39:21.818104       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:39:21.818158       1 main.go:301] handling current node
	
	
	==> kindnet [3824cad09c2135ada2cfdfd835293a21e06eda6cee6ecac3ed7f1d5fc47fee58] <==
	I1002 21:38:18.321266       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:38:18.321920       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:38:18.325485       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:38:18.325593       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:38:18.325631       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:38:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:38:18.517676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:38:18.517766       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:38:18.517798       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:38:18.517981       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:38:48.518331       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:38:48.518346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:38:48.518443       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:38:48.518514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 21:38:50.118294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:38:50.118412       1 metrics.go:72] Registering metrics
	I1002 21:38:50.118560       1 controller.go:711] "Syncing nftables rules"
	I1002 21:38:58.517834       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:38:58.517889       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a21ccda7e71dd9290384fa4b3a2407b6db8727bdbf1eab65844a9f3072edf373] <==
	W1002 21:39:03.153141       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.153169       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.153191       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.155770       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.155807       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.155965       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156130       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156163       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156382       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156416       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156446       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.156477       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160508       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160539       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160564       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160589       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160614       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160639       1 logging.go:55] [core] [Channel #19 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160662       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160686       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160713       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160864       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.160893       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.161248       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:39:03.163527       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b6f87782ca2214b01449f681d7d0c7871eca77122ed9cb74cefc6cb15c5bb08b] <==
	I1002 21:39:16.636692       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1002 21:39:16.636942       1 controller.go:119] Starting legacy_token_tracking_controller
	I1002 21:39:16.710338       1 shared_informer.go:349] "Waiting for caches to sync" controller="configmaps"
	I1002 21:39:16.636974       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1002 21:39:16.710722       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1002 21:39:16.797028       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:39:16.798866       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:39:16.814276       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:39:16.814422       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:39:16.828241       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 21:39:16.828548       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1002 21:39:16.894670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:39:16.905659       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:39:16.905895       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:39:16.905966       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:39:16.907561       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:39:16.911844       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:39:16.915119       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 21:39:16.915198       1 policy_source.go:240] refreshing policies
	I1002 21:39:16.942215       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:39:16.952583       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:39:16.954411       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:39:16.968536       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:39:17.637484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:39:18.896790       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [8c463a387ca63c6e0efb5d14b494efbc842f4b2e2c92baadbfe325c4ae902de0] <==
	I1002 21:38:16.654581       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:38:16.654623       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:38:16.654667       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:38:16.663768       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-342805" podCIDRs=["10.244.0.0/24"]
	I1002 21:38:16.663815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:38:16.671990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:38:16.680534       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:38:16.680540       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:38:16.681652       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:38:16.681657       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:38:16.682923       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:38:16.683013       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:38:16.683187       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:38:16.683303       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:38:16.683343       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:38:16.683408       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:38:16.683487       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-342805"
	I1002 21:38:16.683528       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 21:38:16.686122       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:38:16.687378       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:38:16.688591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:38:16.689872       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:38:16.689937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:38:16.693359       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:39:01.688962       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [fb276e882bc35cb5818102f56d74380625fe699c8e09cd27bc7b7261535c718a] <==
	I1002 21:39:20.544469       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:39:20.551157       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:39:20.551227       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:39:20.566485       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:39:20.590841       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:39:20.590959       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:39:20.591046       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:39:20.593945       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:39:20.594004       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:39:20.594136       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:39:20.596508       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:39:20.597715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:39:20.608645       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 21:39:20.608740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:39:20.608766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:39:20.608789       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 21:39:20.621066       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:39:20.621098       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:39:20.621108       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:39:20.638200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:39:20.638327       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:39:20.638410       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-342805"
	I1002 21:39:20.638463       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:39:20.670319       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:39:20.690636       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [a0a1466e037f111cf8a0823a7a5735c20064ba3a688f3838fc67549f7fca96bf] <==
	I1002 21:38:18.235974       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:38:18.372466       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:38:18.473178       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:38:18.473303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:38:18.473437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:38:18.506077       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:38:18.506205       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:38:18.519685       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:38:18.520102       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:38:18.524978       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:38:18.526404       1 config.go:200] "Starting service config controller"
	I1002 21:38:18.526464       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:38:18.526505       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:38:18.526532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:38:18.526584       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:38:18.526610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:38:18.527310       1 config.go:309] "Starting node config controller"
	I1002 21:38:18.528551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:38:18.528599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:38:18.627500       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:38:18.627616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:38:18.627642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f51f346240e74219bcbf28ffade50a4f610755e03459803502dccfa987f4c32b] <==
	I1002 21:39:12.137448       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:39:13.149223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1002 21:39:16.887879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-342805\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1002 21:39:18.474531       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:39:18.474659       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:39:18.474773       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:39:18.498525       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:39:18.498586       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:39:18.502451       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:39:18.502745       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:39:18.502768       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:39:18.504113       1 config.go:200] "Starting service config controller"
	I1002 21:39:18.504138       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:39:18.504167       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:39:18.504183       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:39:18.504194       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:39:18.504198       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:39:18.505028       1 config.go:309] "Starting node config controller"
	I1002 21:39:18.505048       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:39:18.505055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:39:18.604291       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:39:18.604328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:39:18.604354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [710a4b876c5a6f18185d01b4227f6a3f3959c6204416ea776c9687a2eacfb588] <==
	I1002 21:39:17.178716       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:39:18.539463       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:39:18.539566       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:39:18.544403       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:39:18.544539       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:39:18.544656       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:18.544715       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:18.544774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:39:18.544811       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:39:18.544909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:39:18.544990       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:39:18.645495       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:39:18.645618       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:39:18.645755       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [896a2a1f7e8154e713d114640eedbc8a8ae21d748779df0ef9a01c71f6348695] <==
	E1002 21:38:09.740015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:38:09.740109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:38:09.740151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:38:09.740168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:38:10.559450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:38:10.684432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:38:10.728197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:38:10.756768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:38:10.763418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:38:10.769485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:38:10.807322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:38:10.816014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:38:10.820540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:38:10.861031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:38:10.879010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:38:10.921643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 21:38:10.926743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:38:11.175469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 21:38:13.985682       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:03.137226       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 21:39:03.137270       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:39:03.137236       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:39:03.137318       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:39:03.137322       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:39:03.137337       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.543576    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-342805\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e80884df27b724a2121d49199cbc5758" pod="kube-system/kube-controller-manager-pause-342805"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.544150    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b8p7f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="198ef9a9-14fd-48d1-a6a2-e318eaa0436e" pod="kube-system/kube-proxy-b8p7f"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.544981    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9p45q\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57fa9c10-a34c-4e2c-8201-e61aedf6b127" pod="kube-system/kindnet-9p45q"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.545350    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-wklz5\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="703d8048-2e98-4139-bb02-6bf0333a3a18" pod="kube-system/coredns-66bc5c9577-wklz5"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.545677    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-342805\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="278d87b72416104012ce1c09ad4fa897" pod="kube-system/kube-scheduler-pause-342805"
	Oct 02 21:39:11 pause-342805 kubelet[1304]: E1002 21:39:11.546022    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-342805\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="16d2c7133107f2d0bcf56be7d313f152" pod="kube-system/etcd-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.652936    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9p45q\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="57fa9c10-a34c-4e2c-8201-e61aedf6b127" pod="kube-system/kindnet-9p45q"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.653909    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-342805\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.654070    1304 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-342805\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.658533    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-wklz5\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="703d8048-2e98-4139-bb02-6bf0333a3a18" pod="kube-system/coredns-66bc5c9577-wklz5"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.661466    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="278d87b72416104012ce1c09ad4fa897" pod="kube-system/kube-scheduler-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.665591    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="16d2c7133107f2d0bcf56be7d313f152" pod="kube-system/etcd-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.668687    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="7c6b81ffc8e764b4f2a95b59b0ff4299" pod="kube-system/kube-apiserver-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.670510    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="e80884df27b724a2121d49199cbc5758" pod="kube-system/kube-controller-manager-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.671963    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-b8p7f\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="198ef9a9-14fd-48d1-a6a2-e318eaa0436e" pod="kube-system/kube-proxy-b8p7f"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.673489    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="278d87b72416104012ce1c09ad4fa897" pod="kube-system/kube-scheduler-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.675300    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="16d2c7133107f2d0bcf56be7d313f152" pod="kube-system/etcd-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.676809    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="7c6b81ffc8e764b4f2a95b59b0ff4299" pod="kube-system/kube-apiserver-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.678701    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342805\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="e80884df27b724a2121d49199cbc5758" pod="kube-system/kube-controller-manager-pause-342805"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.680961    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-b8p7f\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="198ef9a9-14fd-48d1-a6a2-e318eaa0436e" pod="kube-system/kube-proxy-b8p7f"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.682178    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9p45q\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="57fa9c10-a34c-4e2c-8201-e61aedf6b127" pod="kube-system/kindnet-9p45q"
	Oct 02 21:39:16 pause-342805 kubelet[1304]: E1002 21:39:16.683191    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-wklz5\" is forbidden: User \"system:node:pause-342805\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342805' and this object" podUID="703d8048-2e98-4139-bb02-6bf0333a3a18" pod="kube-system/coredns-66bc5c9577-wklz5"
	Oct 02 21:39:20 pause-342805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:39:20 pause-342805 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:39:20 pause-342805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342805 -n pause-342805
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342805 -n pause-342805: exit status 2 (336.754844ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-342805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (366.582024ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:51:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-714101 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-714101 describe deploy/metrics-server -n kube-system: exit status 1 (100.234917ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-714101 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-714101
helpers_test.go:243: (dbg) docker inspect old-k8s-version-714101:

-- stdout --
	[
	    {
	        "Id": "e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67",
	        "Created": "2025-10-02T21:50:34.734644622Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1178504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:50:34.806996416Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/hosts",
	        "LogPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67-json.log",
	        "Name": "/old-k8s-version-714101",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-714101:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-714101",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67",
	                "LowerDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-714101",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-714101/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-714101",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-714101",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-714101",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d423927066aadd0088428c4d1d0e9edd5488636873017cbf62e85f9a7bb6c079",
	            "SandboxKey": "/var/run/docker/netns/d423927066aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34183"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34184"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-714101": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:a7:d5:aa:01:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed3dcf8a95545fb7e9009343422d8cf7e7334b26a46fbfef0ce71c0f5ff11be4",
	                    "EndpointID": "38177dd4cdd78b46baec29e1299c4db623dfccb61cc39ce6156eac84955cea21",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-714101",
	                        "e7b0b66ac30c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
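For reference, the "Ports" map in the inspect dump above is how the kic driver publishes the container's internal endpoints on 127.0.0.1 (22 is SSH, 2376 the Docker daemon, 8443 the Kubernetes apiserver). A minimal Go sketch that pulls those host-port bindings back out of a saved dump; the file name inspect.json is an assumption, not an artifact of the test run:

	// portmap.go: decode a saved `docker container inspect` dump and print
	// the published host ports. Illustrative only; not part of the test suite.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// inspectEntry declares only the fields this sketch needs; encoding/json
	// silently ignores the rest of the (large) inspect document.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		data, err := os.ReadFile("inspect.json") // assumed dump location
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // `docker inspect` always emits a JSON array
		if err := json.Unmarshal(data, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			for port, bindings := range e.NetworkSettings.Ports {
				for _, b := range bindings {
					// e.g. "22/tcp -> 127.0.0.1:34181"
					fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
				}
			}
		}
	}

Field matching in encoding/json is case-insensitive and partial, so the trimmed struct above is enough to reach NetworkSettings.Ports without modelling the whole inspect document.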
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-714101 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-714101 logs -n 25: (1.147382724s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-644857 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo containerd config dump                                                                                                                                                                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo crio config                                                                                                                                                                                                             │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ delete  │ -p cilium-644857                                                                                                                                                                                                                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │ 02 Oct 25 21:41 UTC │
	│ start   │ -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ force-systemd-flag-987043 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-flag-987043                                                                                                                                                                                                                  │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:51:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:51:40.958965 1180887 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:51:40.959076 1180887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:40.959080 1180887 out.go:374] Setting ErrFile to fd 2...
	I1002 21:51:40.959083 1180887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:40.959324 1180887 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:51:40.959683 1180887 out.go:368] Setting JSON to false
	I1002 21:51:40.960639 1180887 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23638,"bootTime":1759418263,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:51:40.960698 1180887 start.go:140] virtualization:  
	I1002 21:51:40.964601 1180887 out.go:179] * [cert-expiration-955864] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:51:40.967649 1180887 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:51:40.967692 1180887 notify.go:221] Checking for updates...
	I1002 21:51:40.971089 1180887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:51:40.974003 1180887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:51:40.976786 1180887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:51:40.979705 1180887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:51:40.982505 1180887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:51:40.985872 1180887 config.go:182] Loaded profile config "cert-expiration-955864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:51:40.986482 1180887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:51:41.013732 1180887 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:51:41.013840 1180887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:51:41.094764 1180887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:51:41.085307901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:51:41.094852 1180887 docker.go:319] overlay module found
	I1002 21:51:41.097865 1180887 out.go:179] * Using the docker driver based on existing profile
	I1002 21:51:41.100679 1180887 start.go:306] selected driver: docker
	I1002 21:51:41.100687 1180887 start.go:936] validating driver "docker" against &{Name:cert-expiration-955864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:51:41.100785 1180887 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:51:41.101557 1180887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:51:41.157023 1180887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:51:41.147907831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:51:41.157320 1180887 cni.go:84] Creating CNI manager for ""
	I1002 21:51:41.157374 1180887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:51:41.157410 1180887 start.go:350] cluster config:
	{Name:cert-expiration-955864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-955864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:51:41.160451 1180887 out.go:179] * Starting "cert-expiration-955864" primary control-plane node in "cert-expiration-955864" cluster
	I1002 21:51:41.163209 1180887 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:51:41.166131 1180887 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:51:41.169055 1180887 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:51:41.169102 1180887 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:51:41.169110 1180887 cache.go:59] Caching tarball of preloaded images
	I1002 21:51:41.169108 1180887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:51:41.169188 1180887 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:51:41.169266 1180887 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:51:41.169373 1180887 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/cert-expiration-955864/config.json ...
	I1002 21:51:41.188672 1180887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:51:41.188683 1180887 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:51:41.188709 1180887 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:51:41.188731 1180887 start.go:361] acquireMachinesLock for cert-expiration-955864: {Name:mk17ba83053c428e3e5a5b6dc8fe84c1b101dcdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:51:41.188796 1180887 start.go:365] duration metric: took 47.925µs to acquireMachinesLock for "cert-expiration-955864"
	I1002 21:51:41.188814 1180887 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:51:41.188818 1180887 fix.go:55] fixHost starting: 
	I1002 21:51:41.189078 1180887 cli_runner.go:164] Run: docker container inspect cert-expiration-955864 --format={{.State.Status}}
	I1002 21:51:41.205988 1180887 fix.go:113] recreateIfNeeded on cert-expiration-955864: state=Running err=<nil>
	W1002 21:51:41.206005 1180887 fix.go:139] unexpected machine state, will restart: <nil>
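	The acquireMachinesLock step above serializes concurrent minikube processes that operate on the same machine directory; the logged spec ({... Delay:500ms Timeout:10m0s Cancel:<nil>}) shows it retries every 500ms and gives up after 10 minutes. A minimal Go sketch of that poll-until-timeout pattern using a create-exclusive lock file follows; this is illustrative only, not minikube's actual implementation, and the names tryLock and acquireWithRetry are invented for the example:

		// lockretry.go: poll-until-timeout file locking, sketched to match the
		// Delay/Timeout parameters logged above. Illustrative only.
		package main

		import (
			"errors"
			"fmt"
			"os"
			"time"
		)

		// tryLock attempts to take an exclusive lock by creating the lock file.
		// O_EXCL makes create-if-absent atomic on a local filesystem.
		func tryLock(path string) (bool, error) {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if errors.Is(err, os.ErrExist) {
				return false, nil // another process holds the lock
			}
			if err != nil {
				return false, err
			}
			return true, f.Close() // release would be os.Remove(path); omitted here
		}

		// acquireWithRetry polls tryLock every delay until timeout elapses.
		func acquireWithRetry(path string, delay, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for {
				ok, err := tryLock(path)
				if err != nil {
					return err
				}
				if ok {
					return nil
				}
				if time.Now().After(deadline) {
					return fmt.Errorf("timed out acquiring %s after %s", path, timeout)
				}
				time.Sleep(delay)
			}
		}

		func main() {
			// hypothetical lock path; minikube's real lock name is derived from
			// a hash of the machine directory, as the mk17... name above suggests
			if err := acquireWithRetry("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("lock acquired")
		}

	Because the O_CREATE|O_EXCL open is atomic, two concurrent callers can never both succeed, and the deadline turns a stuck competitor into a prompt error rather than an indefinite hang, which is what lets a wedged test run fail visibly instead of stalling.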
	
	
	==> CRI-O <==
	Oct 02 21:51:30 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:30.188086002Z" level=info msg="Created container aae244ed98ab7a04e65c1a9399796ba7c4d3cdb52d2b07adeb911acd3f4963d3: kube-system/coredns-5dd5756b68-f7qdk/coredns" id=cae1411f-9f80-4fce-9ea0-1406374cfaeb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:51:30 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:30.190619334Z" level=info msg="Starting container: aae244ed98ab7a04e65c1a9399796ba7c4d3cdb52d2b07adeb911acd3f4963d3" id=1f846d54-3142-4445-825e-12638ea293d8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:51:30 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:30.195530171Z" level=info msg="Started container" PID=1960 containerID=aae244ed98ab7a04e65c1a9399796ba7c4d3cdb52d2b07adeb911acd3f4963d3 description=kube-system/coredns-5dd5756b68-f7qdk/coredns id=1f846d54-3142-4445-825e-12638ea293d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f87d67468de7a581a13b1391fedd688c486c75fb32cc414c21cc90358d5ae0b5
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.241350844Z" level=info msg="Running pod sandbox: default/busybox/POD" id=557cc04d-7db2-422e-a1be-277f9db2cdb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.241428323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.246697919Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398 UID:cd2e4885-9738-419b-a43f-b2503a5228c3 NetNS:/var/run/netns/3a1dc800-3037-4520-8be3-bb694cf9b147 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079cb0}] Aliases:map[]}"
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.24673032Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.258633817Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398 UID:cd2e4885-9738-419b-a43f-b2503a5228c3 NetNS:/var/run/netns/3a1dc800-3037-4520-8be3-bb694cf9b147 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079cb0}] Aliases:map[]}"
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.25877705Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.263393371Z" level=info msg="Ran pod sandbox 39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398 with infra container: default/busybox/POD" id=557cc04d-7db2-422e-a1be-277f9db2cdb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.265993582Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d7b5c113-bf1b-4e61-b752-135e2788c8c3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.266277226Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d7b5c113-bf1b-4e61-b752-135e2788c8c3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.266386474Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d7b5c113-bf1b-4e61-b752-135e2788c8c3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.267344479Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=478ff653-e271-4ff0-a3bf-da47205a4ca7 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:51:33 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:33.273517133Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.354804384Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=478ff653-e271-4ff0-a3bf-da47205a4ca7 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.356017259Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a92d8c83-3421-41a3-902d-c3fa7aad8d1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.358998308Z" level=info msg="Creating container: default/busybox/busybox" id=68edcb96-77dd-4c2d-8143-ef7786121e5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.360638101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.365390648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.365949394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.394548828Z" level=info msg="Created container d02d2a144e0097dd9d9170cd9bf5a90aabc1d17de14a1bd51cedb48836cc95ad: default/busybox/busybox" id=68edcb96-77dd-4c2d-8143-ef7786121e5f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.397580567Z" level=info msg="Starting container: d02d2a144e0097dd9d9170cd9bf5a90aabc1d17de14a1bd51cedb48836cc95ad" id=5cea04a9-6c7d-4e17-88e2-bad46be9ab73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:51:35 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:35.401945039Z" level=info msg="Started container" PID=2017 containerID=d02d2a144e0097dd9d9170cd9bf5a90aabc1d17de14a1bd51cedb48836cc95ad description=default/busybox/busybox id=5cea04a9-6c7d-4e17-88e2-bad46be9ab73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398
	Oct 02 21:51:42 old-k8s-version-714101 crio[832]: time="2025-10-02T21:51:42.208533099Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d02d2a144e009       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   39423378a3ef0       busybox                                          default
	aae244ed98ab7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   f87d67468de7a       coredns-5dd5756b68-f7qdk                         kube-system
	c802bbd9ef83c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   b7fe03a697eed       storage-provisioner                              kube-system
	2358ced8d0c91       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   47e266356642f       kindnet-qgs2b                                    kube-system
	9773c9a1609a6       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   caf6881ce0c74       kube-proxy-9ktm4                                 kube-system
	f024a0b2c005d       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   4a82995f86a98       kube-apiserver-old-k8s-version-714101            kube-system
	c2a55d51dce94       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   fb99eaf625a9e       kube-scheduler-old-k8s-version-714101            kube-system
	0918938e6d354       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   574e9b4331d95       kube-controller-manager-old-k8s-version-714101   kube-system
	568e26e0748d6       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   77473cf236d57       etcd-old-k8s-version-714101                      kube-system
	
	
	==> coredns [aae244ed98ab7a04e65c1a9399796ba7c4d3cdb52d2b07adeb911acd3f4963d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35935 - 29817 "HINFO IN 8294632732398227290.1024712628301424753. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01268339s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-714101
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-714101
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=old-k8s-version-714101
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_51_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:50:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-714101
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:51:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:51:34 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:51:34 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:51:34 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:51:34 +0000   Thu, 02 Oct 2025 21:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-714101
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c6d7d691b94e7ca65303f55d6ada8b
	  System UUID:                fd388e5a-8f2f-4643-a470-d71d3d179fee
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-f7qdk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-714101                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-qgs2b                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-714101             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-714101    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-9ktm4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-714101             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-714101 event: Registered Node old-k8s-version-714101 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-714101 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 21:09] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [568e26e0748d68e9537a80bab277e612c222549351d8f778afe6e1df159dc30b] <==
	{"level":"info","ts":"2025-10-02T21:50:55.708798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T21:50:55.708966Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T21:50:55.710873Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T21:50:55.711012Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:50:55.711174Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:50:55.715823Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T21:50:55.715924Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T21:50:56.678082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-02T21:50:56.678206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-02T21:50:56.678247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-02T21:50:56.678296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-02T21:50:56.678328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T21:50:56.678379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-02T21:50:56.678413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T21:50:56.680164Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-714101 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T21:50:56.68024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:50:56.681544Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T21:50:56.681673Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:50:56.682055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:50:56.686072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T21:50:56.686791Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:50:56.686923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:50:56.686984Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:50:56.687035Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T21:50:56.687076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:51:44 up  6:34,  0 user,  load average: 1.59, 1.10, 1.31
	Linux old-k8s-version-714101 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2358ced8d0c91c295acc54c79d53b190b18d1c957611fef92d208b6058bee5b7] <==
	I1002 21:51:19.014996       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:51:19.015444       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:51:19.015632       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:51:19.015688       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:51:19.015726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:51:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:51:19.307398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:51:19.307423       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:51:19.307432       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:51:19.308534       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 21:51:19.508321       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:51:19.508434       1 metrics.go:72] Registering metrics
	I1002 21:51:19.508511       1 controller.go:711] "Syncing nftables rules"
	I1002 21:51:29.311814       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:51:29.311874       1 main.go:301] handling current node
	I1002 21:51:39.310153       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:51:39.310206       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f024a0b2c005ddc6911859dcbc22a4f7c792416959e9414db65b930a5f15bea9] <==
	I1002 21:50:59.907469       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 21:50:59.907510       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 21:50:59.907516       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 21:50:59.907869       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 21:50:59.907975       1 aggregator.go:166] initial CRD sync complete...
	I1002 21:50:59.908019       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 21:50:59.908051       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:50:59.908079       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:50:59.935510       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:51:00.625432       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 21:51:00.634502       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:51:00.634529       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:51:01.305999       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:51:01.370320       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:51:01.442397       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 21:51:01.449860       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 21:51:01.451158       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 21:51:01.457447       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:51:01.819014       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 21:51:03.277859       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 21:51:03.300374       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 21:51:03.320935       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 21:51:15.034793       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 21:51:15.474747       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1002 21:51:42.319657       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.76.2:44712->192.168.76.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [0918938e6d3540c5bd71134714b4abd44dc834e1387a435c465b3cc174dfcfd1] <==
	I1002 21:51:14.808117       1 shared_informer.go:318] Caches are synced for endpoint
	I1002 21:51:14.853831       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 21:51:14.874968       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 21:51:15.041456       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1002 21:51:15.210730       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 21:51:15.212401       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 21:51:15.212427       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 21:51:15.488052       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9ktm4"
	I1002 21:51:15.494938       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qgs2b"
	I1002 21:51:15.691802       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2pmdr"
	I1002 21:51:15.736785       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-f7qdk"
	I1002 21:51:15.800797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="759.581547ms"
	I1002 21:51:15.891372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.51966ms"
	I1002 21:51:15.891514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.89µs"
	I1002 21:51:15.891610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.3µs"
	I1002 21:51:17.076384       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 21:51:17.117755       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-2pmdr"
	I1002 21:51:17.135509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.864455ms"
	I1002 21:51:17.161576       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.022131ms"
	I1002 21:51:17.161648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.394µs"
	I1002 21:51:29.785409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.148µs"
	I1002 21:51:29.823457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.869µs"
	I1002 21:51:30.708314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.308302ms"
	I1002 21:51:30.708565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.209µs"
	I1002 21:51:34.747008       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [9773c9a1609a685f5c9c71790af046bbbfa82a12a0c211f9bd31dc4a35734a41] <==
	I1002 21:51:16.306941       1 server_others.go:69] "Using iptables proxy"
	I1002 21:51:16.330638       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1002 21:51:16.416042       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:51:16.418962       1 server_others.go:152] "Using iptables Proxier"
	I1002 21:51:16.419000       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 21:51:16.419008       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 21:51:16.419044       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 21:51:16.419263       1 server.go:846] "Version info" version="v1.28.0"
	I1002 21:51:16.419273       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:51:16.420519       1 config.go:188] "Starting service config controller"
	I1002 21:51:16.420530       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 21:51:16.420549       1 config.go:97] "Starting endpoint slice config controller"
	I1002 21:51:16.420554       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 21:51:16.420873       1 config.go:315] "Starting node config controller"
	I1002 21:51:16.420879       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 21:51:16.521655       1 shared_informer.go:318] Caches are synced for node config
	I1002 21:51:16.521698       1 shared_informer.go:318] Caches are synced for service config
	I1002 21:51:16.521725       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c2a55d51dce9409c8ad7e5d317b877ef16d0660ccef2e15bfb068d4d80b2a3a9] <==
	W1002 21:50:59.861251       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 21:50:59.861268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 21:50:59.861340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 21:50:59.861356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 21:50:59.861428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 21:50:59.861444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 21:50:59.870324       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 21:50:59.870362       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:51:00.698503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 21:51:00.698575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 21:51:00.722842       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 21:51:00.723034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 21:51:00.772974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 21:51:00.773113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 21:51:00.905024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 21:51:00.905196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 21:51:00.929698       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 21:51:00.929736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 21:51:01.024711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 21:51:01.024874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 21:51:01.043840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 21:51:01.043965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 21:51:01.050701       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 21:51:01.050806       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 21:51:03.329943       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.500576    1402 topology_manager.go:215] "Topology Admit Handler" podUID="902dc118-e33e-4d60-8711-8394ffefed71" podNamespace="kube-system" podName="kube-proxy-9ktm4"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.504545    1402 topology_manager.go:215] "Topology Admit Handler" podUID="4f2179e4-429f-4a72-886a-c6a3e321a396" podNamespace="kube-system" podName="kindnet-qgs2b"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603263    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/902dc118-e33e-4d60-8711-8394ffefed71-kube-proxy\") pod \"kube-proxy-9ktm4\" (UID: \"902dc118-e33e-4d60-8711-8394ffefed71\") " pod="kube-system/kube-proxy-9ktm4"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603311    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/902dc118-e33e-4d60-8711-8394ffefed71-lib-modules\") pod \"kube-proxy-9ktm4\" (UID: \"902dc118-e33e-4d60-8711-8394ffefed71\") " pod="kube-system/kube-proxy-9ktm4"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603342    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f2179e4-429f-4a72-886a-c6a3e321a396-xtables-lock\") pod \"kindnet-qgs2b\" (UID: \"4f2179e4-429f-4a72-886a-c6a3e321a396\") " pod="kube-system/kindnet-qgs2b"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603364    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f2179e4-429f-4a72-886a-c6a3e321a396-lib-modules\") pod \"kindnet-qgs2b\" (UID: \"4f2179e4-429f-4a72-886a-c6a3e321a396\") " pod="kube-system/kindnet-qgs2b"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603389    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppklg\" (UniqueName: \"kubernetes.io/projected/4f2179e4-429f-4a72-886a-c6a3e321a396-kube-api-access-ppklg\") pod \"kindnet-qgs2b\" (UID: \"4f2179e4-429f-4a72-886a-c6a3e321a396\") " pod="kube-system/kindnet-qgs2b"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603414    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqrm\" (UniqueName: \"kubernetes.io/projected/902dc118-e33e-4d60-8711-8394ffefed71-kube-api-access-rwqrm\") pod \"kube-proxy-9ktm4\" (UID: \"902dc118-e33e-4d60-8711-8394ffefed71\") " pod="kube-system/kube-proxy-9ktm4"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603442    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/902dc118-e33e-4d60-8711-8394ffefed71-xtables-lock\") pod \"kube-proxy-9ktm4\" (UID: \"902dc118-e33e-4d60-8711-8394ffefed71\") " pod="kube-system/kube-proxy-9ktm4"
	Oct 02 21:51:15 old-k8s-version-714101 kubelet[1402]: I1002 21:51:15.603466    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4f2179e4-429f-4a72-886a-c6a3e321a396-cni-cfg\") pod \"kindnet-qgs2b\" (UID: \"4f2179e4-429f-4a72-886a-c6a3e321a396\") " pod="kube-system/kindnet-qgs2b"
	Oct 02 21:51:16 old-k8s-version-714101 kubelet[1402]: W1002 21:51:16.122899    1402 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/crio-47e266356642f2a4d1856bd43c195a4ae1ab12a95f34711ff6a1af5329f6a79d WatchSource:0}: Error finding container 47e266356642f2a4d1856bd43c195a4ae1ab12a95f34711ff6a1af5329f6a79d: Status 404 returned error can't find the container with id 47e266356642f2a4d1856bd43c195a4ae1ab12a95f34711ff6a1af5329f6a79d
	Oct 02 21:51:16 old-k8s-version-714101 kubelet[1402]: I1002 21:51:16.622976    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9ktm4" podStartSLOduration=1.6229075229999999 podCreationTimestamp="2025-10-02 21:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:51:16.614699641 +0000 UTC m=+13.379476017" watchObservedRunningTime="2025-10-02 21:51:16.622907523 +0000 UTC m=+13.387683899"
	Oct 02 21:51:19 old-k8s-version-714101 kubelet[1402]: I1002 21:51:19.622932    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-qgs2b" podStartSLOduration=1.8303561799999999 podCreationTimestamp="2025-10-02 21:51:15 +0000 UTC" firstStartedPulling="2025-10-02 21:51:16.12679469 +0000 UTC m=+12.891571066" lastFinishedPulling="2025-10-02 21:51:18.919316128 +0000 UTC m=+15.684092512" observedRunningTime="2025-10-02 21:51:19.612091023 +0000 UTC m=+16.376867407" watchObservedRunningTime="2025-10-02 21:51:19.622877626 +0000 UTC m=+16.387654010"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.745276    1402 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.776571    1402 topology_manager.go:215] "Topology Admit Handler" podUID="84b9ee34-40ec-4d3f-9171-c7a8578abb2b" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.784259    1402 topology_manager.go:215] "Topology Admit Handler" podUID="848cb78b-98da-49f0-ab85-a772e528b803" podNamespace="kube-system" podName="coredns-5dd5756b68-f7qdk"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.803522    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/84b9ee34-40ec-4d3f-9171-c7a8578abb2b-tmp\") pod \"storage-provisioner\" (UID: \"84b9ee34-40ec-4d3f-9171-c7a8578abb2b\") " pod="kube-system/storage-provisioner"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.803589    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72r5t\" (UniqueName: \"kubernetes.io/projected/84b9ee34-40ec-4d3f-9171-c7a8578abb2b-kube-api-access-72r5t\") pod \"storage-provisioner\" (UID: \"84b9ee34-40ec-4d3f-9171-c7a8578abb2b\") " pod="kube-system/storage-provisioner"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.904044    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/848cb78b-98da-49f0-ab85-a772e528b803-config-volume\") pod \"coredns-5dd5756b68-f7qdk\" (UID: \"848cb78b-98da-49f0-ab85-a772e528b803\") " pod="kube-system/coredns-5dd5756b68-f7qdk"
	Oct 02 21:51:29 old-k8s-version-714101 kubelet[1402]: I1002 21:51:29.904282    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42d4\" (UniqueName: \"kubernetes.io/projected/848cb78b-98da-49f0-ab85-a772e528b803-kube-api-access-h42d4\") pod \"coredns-5dd5756b68-f7qdk\" (UID: \"848cb78b-98da-49f0-ab85-a772e528b803\") " pod="kube-system/coredns-5dd5756b68-f7qdk"
	Oct 02 21:51:30 old-k8s-version-714101 kubelet[1402]: I1002 21:51:30.689738    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.689693422 podCreationTimestamp="2025-10-02 21:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:51:30.657862501 +0000 UTC m=+27.422638893" watchObservedRunningTime="2025-10-02 21:51:30.689693422 +0000 UTC m=+27.454469798"
	Oct 02 21:51:32 old-k8s-version-714101 kubelet[1402]: I1002 21:51:32.939091    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-f7qdk" podStartSLOduration=17.939050233 podCreationTimestamp="2025-10-02 21:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:51:30.692016717 +0000 UTC m=+27.456793101" watchObservedRunningTime="2025-10-02 21:51:32.939050233 +0000 UTC m=+29.703826617"
	Oct 02 21:51:32 old-k8s-version-714101 kubelet[1402]: I1002 21:51:32.939767    1402 topology_manager.go:215] "Topology Admit Handler" podUID="cd2e4885-9738-419b-a43f-b2503a5228c3" podNamespace="default" podName="busybox"
	Oct 02 21:51:33 old-k8s-version-714101 kubelet[1402]: I1002 21:51:33.029389    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8jt\" (UniqueName: \"kubernetes.io/projected/cd2e4885-9738-419b-a43f-b2503a5228c3-kube-api-access-zp8jt\") pod \"busybox\" (UID: \"cd2e4885-9738-419b-a43f-b2503a5228c3\") " pod="default/busybox"
	Oct 02 21:51:33 old-k8s-version-714101 kubelet[1402]: W1002 21:51:33.263922    1402 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/crio-39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398 WatchSource:0}: Error finding container 39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398: Status 404 returned error can't find the container with id 39423378a3ef0e9f33667076d42887a9c43df81ec207f828ea8f78485918b398
	
	
	==> storage-provisioner [c802bbd9ef83c1871b57c27ea4dbaefb4a2698d97f3f204a4881b2e6148ec329] <==
	I1002 21:51:30.207606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:51:30.231307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:51:30.231454       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 21:51:30.245784       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:51:30.246018       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714101_6efc2ff3-37a2-43f6-8df8-b77f5e4b69a8!
	I1002 21:51:30.247365       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89d3a326-7957-4e4b-8a32-0337fd7fbaa5", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-714101_6efc2ff3-37a2-43f6-8df8-b77f5e4b69a8 became leader
	I1002 21:51:30.346769       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714101_6efc2ff3-37a2-43f6-8df8-b77f5e4b69a8!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714101 -n old-k8s-version-714101
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-714101 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)
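This step failed in 2.42s, which points at the addons enable call itself rather than at pod readiness. A minimal manual reproduction of what the step exercises, assuming the test enables the metrics-server addon (the addon name is an assumption; it is not visible in this excerpt):

	# Hypothetical repro; "metrics-server" is assumed, not read from this log.
	out/minikube-linux-arm64 -p old-k8s-version-714101 addons enable metrics-server
	# Mirror the post-mortem check above: list pods that are not Running.
	kubectl --context old-k8s-version-714101 get po -A --field-selector=status.phase!=Running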

x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-714101 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-714101 --alsologtostderr -v=1: exit status 80 (2.383897199s)

-- stdout --
	* Pausing node old-k8s-version-714101 ... 
	
	

-- /stdout --
** stderr ** 
	I1002 21:53:04.691212 1188335 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:53:04.691359 1188335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:04.691371 1188335 out.go:374] Setting ErrFile to fd 2...
	I1002 21:53:04.691376 1188335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:04.691692 1188335 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:53:04.692053 1188335 out.go:368] Setting JSON to false
	I1002 21:53:04.692079 1188335 mustload.go:65] Loading cluster: old-k8s-version-714101
	I1002 21:53:04.692537 1188335 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:53:04.693046 1188335 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:53:04.716415 1188335 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:53:04.716730 1188335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:04.782415 1188335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:53:04.77320425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:04.783102 1188335 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-714101 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 21:53:04.786446 1188335 out.go:179] * Pausing node old-k8s-version-714101 ... 
	I1002 21:53:04.789401 1188335 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:53:04.789731 1188335 ssh_runner.go:195] Run: systemctl --version
	I1002 21:53:04.789785 1188335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:53:04.808743 1188335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:53:04.904572 1188335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:04.923042 1188335 pause.go:51] kubelet running: true
	I1002 21:53:04.923111 1188335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:53:05.205442 1188335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:53:05.205532 1188335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:53:05.283173 1188335 cri.go:89] found id: "dd0b9cee0f7e3c89c78675db4423007b357320668c81cbc1fdc6e2b580b171de"
	I1002 21:53:05.283197 1188335 cri.go:89] found id: "5df3b4c4cd17f23a02364ab3315805996102ae0b9cc8eaa6c40c3633d6493a30"
	I1002 21:53:05.283203 1188335 cri.go:89] found id: "6ede589ba5dbe96da518cad779870f082e644a9d501eb5455c06ab59e4fb8bd8"
	I1002 21:53:05.283206 1188335 cri.go:89] found id: "b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560"
	I1002 21:53:05.283210 1188335 cri.go:89] found id: "9af8e581376283afb448db698b74533910e6b9797b8fec4999d5b566a07a942a"
	I1002 21:53:05.283216 1188335 cri.go:89] found id: "5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe"
	I1002 21:53:05.283220 1188335 cri.go:89] found id: "c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473"
	I1002 21:53:05.283223 1188335 cri.go:89] found id: "3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb"
	I1002 21:53:05.283227 1188335 cri.go:89] found id: "d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81"
	I1002 21:53:05.283233 1188335 cri.go:89] found id: "1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1"
	I1002 21:53:05.283237 1188335 cri.go:89] found id: "a87746724133411149d8880820420b63e43012165b82087926e62ccf4703f7e5"
	I1002 21:53:05.283240 1188335 cri.go:89] found id: ""
	I1002 21:53:05.283289 1188335 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:53:05.302434 1188335 retry.go:31] will retry after 129.651545ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:05Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:53:05.432981 1188335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:05.448600 1188335 pause.go:51] kubelet running: false
	I1002 21:53:05.448703 1188335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:53:05.642779 1188335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:53:05.642861 1188335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:53:05.717544 1188335 cri.go:89] found id: "dd0b9cee0f7e3c89c78675db4423007b357320668c81cbc1fdc6e2b580b171de"
	I1002 21:53:05.717567 1188335 cri.go:89] found id: "5df3b4c4cd17f23a02364ab3315805996102ae0b9cc8eaa6c40c3633d6493a30"
	I1002 21:53:05.717573 1188335 cri.go:89] found id: "6ede589ba5dbe96da518cad779870f082e644a9d501eb5455c06ab59e4fb8bd8"
	I1002 21:53:05.717577 1188335 cri.go:89] found id: "b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560"
	I1002 21:53:05.717580 1188335 cri.go:89] found id: "9af8e581376283afb448db698b74533910e6b9797b8fec4999d5b566a07a942a"
	I1002 21:53:05.717583 1188335 cri.go:89] found id: "5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe"
	I1002 21:53:05.717586 1188335 cri.go:89] found id: "c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473"
	I1002 21:53:05.717589 1188335 cri.go:89] found id: "3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb"
	I1002 21:53:05.717597 1188335 cri.go:89] found id: "d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81"
	I1002 21:53:05.717604 1188335 cri.go:89] found id: "1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1"
	I1002 21:53:05.717607 1188335 cri.go:89] found id: "a87746724133411149d8880820420b63e43012165b82087926e62ccf4703f7e5"
	I1002 21:53:05.717611 1188335 cri.go:89] found id: ""
	I1002 21:53:05.717662 1188335 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:53:05.728375 1188335 retry.go:31] will retry after 418.84694ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:05Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:53:06.147800 1188335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:06.161014 1188335 pause.go:51] kubelet running: false
	I1002 21:53:06.161112 1188335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:53:06.335334 1188335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:53:06.335465 1188335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:53:06.407446 1188335 cri.go:89] found id: "dd0b9cee0f7e3c89c78675db4423007b357320668c81cbc1fdc6e2b580b171de"
	I1002 21:53:06.407472 1188335 cri.go:89] found id: "5df3b4c4cd17f23a02364ab3315805996102ae0b9cc8eaa6c40c3633d6493a30"
	I1002 21:53:06.407478 1188335 cri.go:89] found id: "6ede589ba5dbe96da518cad779870f082e644a9d501eb5455c06ab59e4fb8bd8"
	I1002 21:53:06.407483 1188335 cri.go:89] found id: "b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560"
	I1002 21:53:06.407486 1188335 cri.go:89] found id: "9af8e581376283afb448db698b74533910e6b9797b8fec4999d5b566a07a942a"
	I1002 21:53:06.407490 1188335 cri.go:89] found id: "5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe"
	I1002 21:53:06.407494 1188335 cri.go:89] found id: "c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473"
	I1002 21:53:06.407497 1188335 cri.go:89] found id: "3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb"
	I1002 21:53:06.407501 1188335 cri.go:89] found id: "d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81"
	I1002 21:53:06.407519 1188335 cri.go:89] found id: "1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1"
	I1002 21:53:06.407526 1188335 cri.go:89] found id: "a87746724133411149d8880820420b63e43012165b82087926e62ccf4703f7e5"
	I1002 21:53:06.407529 1188335 cri.go:89] found id: ""
	I1002 21:53:06.407588 1188335 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:53:06.418321 1188335 retry.go:31] will retry after 307.939985ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:06Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:53:06.726885 1188335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:06.745436 1188335 pause.go:51] kubelet running: false
	I1002 21:53:06.745521 1188335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:53:06.909564 1188335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:53:06.909648 1188335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:53:06.979602 1188335 cri.go:89] found id: "dd0b9cee0f7e3c89c78675db4423007b357320668c81cbc1fdc6e2b580b171de"
	I1002 21:53:06.979626 1188335 cri.go:89] found id: "5df3b4c4cd17f23a02364ab3315805996102ae0b9cc8eaa6c40c3633d6493a30"
	I1002 21:53:06.979634 1188335 cri.go:89] found id: "6ede589ba5dbe96da518cad779870f082e644a9d501eb5455c06ab59e4fb8bd8"
	I1002 21:53:06.979638 1188335 cri.go:89] found id: "b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560"
	I1002 21:53:06.979641 1188335 cri.go:89] found id: "9af8e581376283afb448db698b74533910e6b9797b8fec4999d5b566a07a942a"
	I1002 21:53:06.979646 1188335 cri.go:89] found id: "5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe"
	I1002 21:53:06.979649 1188335 cri.go:89] found id: "c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473"
	I1002 21:53:06.979652 1188335 cri.go:89] found id: "3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb"
	I1002 21:53:06.979655 1188335 cri.go:89] found id: "d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81"
	I1002 21:53:06.979662 1188335 cri.go:89] found id: "1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1"
	I1002 21:53:06.979665 1188335 cri.go:89] found id: "a87746724133411149d8880820420b63e43012165b82087926e62ccf4703f7e5"
	I1002 21:53:06.979669 1188335 cri.go:89] found id: ""
	I1002 21:53:06.979718 1188335 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:53:07.001763 1188335 out.go:203] 
	W1002 21:53:07.004904 1188335 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:53:07.005000 1188335 out.go:285] * 
	* 
	W1002 21:53:07.017446 1188335 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:53:07.020471 1188335 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-714101 --alsologtostderr -v=1 failed: exit status 80
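Every pause attempt in the stderr above fails at the same point: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so minikube never obtains a container list to pause. A diagnostic sketch, assuming CRI-O's default runc state directory of /run/runc (the crictl and crio-config checks below are illustrations, not minikube's actual recovery path, and the runtime_root key name may differ by CRI-O version):

	# Does the runc state directory exist inside the node?
	out/minikube-linux-arm64 -p old-k8s-version-714101 ssh -- sudo ls -ld /run/runc
	# Enumerate containers through the CRI instead of runc directly.
	out/minikube-linux-arm64 -p old-k8s-version-714101 ssh -- sudo crictl ps -a --quiet
	# Inspect which runtime root CRI-O is actually configured with.
	out/minikube-linux-arm64 -p old-k8s-version-714101 ssh -- sudo crio config | grep runtime_root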
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-714101
helpers_test.go:243: (dbg) docker inspect old-k8s-version-714101:

-- stdout --
	[
	    {
	        "Id": "e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67",
	        "Created": "2025-10-02T21:50:34.734644622Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:51:57.408781147Z",
	            "FinishedAt": "2025-10-02T21:51:56.31189149Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/hosts",
	        "LogPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67-json.log",
	        "Name": "/old-k8s-version-714101",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-714101:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-714101",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67",
	                "LowerDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-714101",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-714101/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-714101",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-714101",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-714101",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a46b01e6b24191a2bd3da230c8c9b6b8d1a8e8a6bdf8d7c761cec0d6a056e273",
	            "SandboxKey": "/var/run/docker/netns/a46b01e6b241",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-714101": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:be:28:97:4c:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed3dcf8a95545fb7e9009343422d8cf7e7334b26a46fbfef0ce71c0f5ff11be4",
	                    "EndpointID": "f0cea423ccacf08c73233bc4db84daf085f044c92aa2dcea421c3c4c57ad518b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-714101",
	                        "e7b0b66ac30c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
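The inspect output confirms the SSH mapping the pause command used: 22/tcp inside the container is published on 127.0.0.1:34186, matching the sshutil line in the stderr above. A quick way to read just that mapping, assuming the docker CLI on the host:

	docker port old-k8s-version-714101 22
	# Expected to print something like: 127.0.0.1:34186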
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101: exit status 2 (435.777582ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
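The host container reports Running while the kubelet was stopped by the failed pause attempt, which plausibly explains the non-zero status here. The --format flag takes a Go template over minikube's status struct; a sketch querying several fields at once (the .Kubelet field name is taken from minikube's status output generally, not from this excerpt):

	out/minikube-linux-arm64 status -p old-k8s-version-714101 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'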
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-714101 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-714101 logs -n 25: (1.62176568s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-644857 sudo containerd config dump                                                                                                                                                                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo crio config                                                                                                                                                                                                             │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ delete  │ -p cilium-644857                                                                                                                                                                                                                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │ 02 Oct 25 21:41 UTC │
	│ start   │ -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ force-systemd-flag-987043 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-flag-987043                                                                                                                                                                                                                  │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ stop    │ -p old-k8s-version-714101 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954         │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:51:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:51:59.930668 1183602 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:51:59.930838 1183602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:59.930870 1183602 out.go:374] Setting ErrFile to fd 2...
	I1002 21:51:59.930890 1183602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:59.931179 1183602 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:51:59.931663 1183602 out.go:368] Setting JSON to false
	I1002 21:51:59.932576 1183602 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23657,"bootTime":1759418263,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:51:59.932671 1183602 start.go:140] virtualization:  
	I1002 21:51:59.935808 1183602 out.go:179] * [no-preload-661954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:51:59.940166 1183602 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:51:59.940279 1183602 notify.go:221] Checking for updates...
	I1002 21:51:59.946446 1183602 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:51:59.949583 1183602 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:51:59.952538 1183602 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:51:59.955520 1183602 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:51:59.958481 1183602 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:51:59.962168 1183602 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:51:59.962274 1183602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:51:59.987677 1183602 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:51:59.987797 1183602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:52:00.177516 1183602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:52:00.140684711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:52:00.177635 1183602 docker.go:319] overlay module found
	I1002 21:52:00.180998 1183602 out.go:179] * Using the docker driver based on user configuration
	I1002 21:52:00.184243 1183602 start.go:306] selected driver: docker
	I1002 21:52:00.184276 1183602 start.go:936] validating driver "docker" against <nil>
	I1002 21:52:00.184290 1183602 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:52:00.185135 1183602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:52:00.367141 1183602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:52:00.338128369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:52:00.367340 1183602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:52:00.367636 1183602 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:52:00.371530 1183602 out.go:179] * Using Docker driver with root privileges
	I1002 21:52:00.376319 1183602 cni.go:84] Creating CNI manager for ""
	I1002 21:52:00.376416 1183602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:00.376427 1183602 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:52:00.376515 1183602 start.go:350] cluster config:
	{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:52:00.383208 1183602 out.go:179] * Starting "no-preload-661954" primary control-plane node in "no-preload-661954" cluster
	I1002 21:52:00.386221 1183602 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:52:00.389486 1183602 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:52:00.393677 1183602 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:52:00.393704 1183602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:52:00.393892 1183602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:52:00.393938 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json: {Name:mkbdb847e5e448aec408b7974fc06806dcf744ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:00.394479 1183602 cache.go:107] acquiring lock: {Name:mk77546a797d48dfa87e4f15444ebfe2ae46de0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394523 1183602 cache.go:107] acquiring lock: {Name:mkb30203224ed1c1a4b88d93d3aeb9a29d46fb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394560 1183602 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 21:52:00.394571 1183602 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.86µs
	I1002 21:52:00.394584 1183602 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 21:52:00.394600 1183602 cache.go:107] acquiring lock: {Name:mk17c8111e11ff4babf675464dda89dffef8dccd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394625 1183602 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:00.394723 1183602 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1002 21:52:00.394867 1183602 cache.go:107] acquiring lock: {Name:mk232b04a28dc0f5922a8e36bb60d83a371a69dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394875 1183602 cache.go:107] acquiring lock: {Name:mk2aab2e3052911889ff3d13b07414606ffa2c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394952 1183602 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:00.394977 1183602 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:00.395051 1183602 cache.go:107] acquiring lock: {Name:mkb9b4c6e229a9543f9236d679c4b53878bc9ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.395088 1183602 cache.go:107] acquiring lock: {Name:mkb1bbde6510d7fb66d3923ec81dcf1545e1aeca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.395141 1183602 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:00.395171 1183602 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:00.395272 1183602 cache.go:107] acquiring lock: {Name:mk783e98a1246826a6f16b0bd25f720d93184154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.395350 1183602 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:00.397362 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:00.398386 1183602 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:00.398557 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:00.398788 1183602 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:00.398935 1183602 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1002 21:52:00.398951 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:00.399262 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:00.458235 1183602 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:52:00.458262 1183602 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:52:00.458277 1183602 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:52:00.458326 1183602 start.go:361] acquireMachinesLock for no-preload-661954: {Name:mk6a385b42202eaf12d2e98c4a7f7a9c153c60e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.458453 1183602 start.go:365] duration metric: took 106.295µs to acquireMachinesLock for "no-preload-661954"
	I1002 21:52:00.458483 1183602 start.go:94] Provisioning new machine with config: &{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:52:00.458553 1183602 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:51:57.377044 1182747 out.go:252] * Restarting existing docker container for "old-k8s-version-714101" ...
	I1002 21:51:57.377126 1182747 cli_runner.go:164] Run: docker start old-k8s-version-714101
	I1002 21:51:57.737528 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:51:57.760429 1182747 kic.go:430] container "old-k8s-version-714101" state is running.
	I1002 21:51:57.760783 1182747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714101
	I1002 21:51:57.783976 1182747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/config.json ...
	I1002 21:51:57.784213 1182747 machine.go:93] provisionDockerMachine start ...
	I1002 21:51:57.784277 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:51:57.821820 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:51:57.822239 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:51:57.822251 1182747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:51:57.822897 1182747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:52:01.007081 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714101
	
	I1002 21:52:01.007113 1182747 ubuntu.go:182] provisioning hostname "old-k8s-version-714101"
	I1002 21:52:01.007184 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:01.044199 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:01.044495 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:52:01.044506 1182747 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-714101 && echo "old-k8s-version-714101" | sudo tee /etc/hostname
	I1002 21:52:01.279327 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714101
	
	I1002 21:52:01.279412 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:01.306593 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:01.306905 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:52:01.306928 1182747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-714101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-714101/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-714101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:52:01.465821 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:52:01.465849 1182747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:52:01.465868 1182747 ubuntu.go:190] setting up certificates
	I1002 21:52:01.465876 1182747 provision.go:84] configureAuth start
	I1002 21:52:01.465937 1182747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714101
	I1002 21:52:01.517590 1182747 provision.go:143] copyHostCerts
	I1002 21:52:01.517658 1182747 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:52:01.517702 1182747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:52:01.517773 1182747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:52:01.517888 1182747 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:52:01.517901 1182747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:52:01.517931 1182747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:52:01.517989 1182747 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:52:01.517999 1182747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:52:01.518028 1182747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:52:01.518114 1182747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-714101 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-714101]
	I1002 21:52:02.117854 1182747 provision.go:177] copyRemoteCerts
	I1002 21:52:02.131285 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:52:02.131421 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:02.221107 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:02.414379 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:52:02.457114 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:52:02.499247 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 21:52:02.534308 1182747 provision.go:87] duration metric: took 1.06841828s to configureAuth
	I1002 21:52:02.534336 1182747 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:52:02.534525 1182747 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:52:02.534703 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:02.593415 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:02.593719 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:52:02.593733 1182747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:52:03.263384 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:52:03.263407 1182747 machine.go:96] duration metric: took 5.479185025s to provisionDockerMachine
	I1002 21:52:03.263418 1182747 start.go:294] postStartSetup for "old-k8s-version-714101" (driver="docker")
	I1002 21:52:03.263428 1182747 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:52:03.263500 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:52:03.263556 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.289628 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.396518 1182747 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:52:03.400969 1182747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:52:03.400995 1182747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:52:03.401021 1182747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:52:03.401083 1182747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:52:03.401171 1182747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:52:03.401278 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:52:03.411693 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:03.435368 1182747 start.go:297] duration metric: took 171.935031ms for postStartSetup
	I1002 21:52:03.435483 1182747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:52:03.435551 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.463154 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.567278 1182747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:52:03.572577 1182747 fix.go:57] duration metric: took 6.220622496s for fixHost
	I1002 21:52:03.572605 1182747 start.go:84] releasing machines lock for "old-k8s-version-714101", held for 6.220674786s
	I1002 21:52:03.572686 1182747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714101
	I1002 21:52:03.595159 1182747 ssh_runner.go:195] Run: cat /version.json
	I1002 21:52:03.595211 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.595428 1182747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:52:03.595489 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.631786 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.633164 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.726146 1182747 ssh_runner.go:195] Run: systemctl --version
	I1002 21:52:03.831847 1182747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:52:03.868822 1182747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:52:03.873888 1182747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:52:03.873959 1182747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:52:03.881419 1182747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:52:03.881443 1182747 start.go:496] detecting cgroup driver to use...
	I1002 21:52:03.881474 1182747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:52:03.881522 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:52:03.896854 1182747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:52:03.909804 1182747 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:52:03.909894 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:52:03.925281 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:52:03.938545 1182747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:52:04.058668 1182747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:52:04.192039 1182747 docker.go:234] disabling docker service ...
	I1002 21:52:04.192122 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:52:04.209778 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:52:04.223333 1182747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:52:04.339179 1182747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:52:04.448278 1182747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:52:04.461370 1182747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:52:04.475518 1182747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 21:52:04.475631 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.484536 1182747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:52:04.484654 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.496911 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.506896 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.515812 1182747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:52:04.524060 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.533435 1182747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.542077 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.550918 1182747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:52:04.558353 1182747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:52:04.565961 1182747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:04.681640 1182747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:52:04.808046 1182747 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:52:04.808163 1182747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:52:04.812043 1182747 start.go:564] Will wait 60s for crictl version
	I1002 21:52:04.812150 1182747 ssh_runner.go:195] Run: which crictl
	I1002 21:52:04.815826 1182747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:52:04.840576 1182747 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:52:04.840698 1182747 ssh_runner.go:195] Run: crio --version
	I1002 21:52:04.872175 1182747 ssh_runner.go:195] Run: crio --version
	I1002 21:52:04.905250 1182747 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1002 21:52:00.471333 1183602 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:52:00.471778 1183602 start.go:160] libmachine.API.Create for "no-preload-661954" (driver="docker")
	I1002 21:52:00.471857 1183602 client.go:168] LocalClient.Create starting
	I1002 21:52:00.471943 1183602 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:52:00.471987 1183602 main.go:141] libmachine: Decoding PEM data...
	I1002 21:52:00.472001 1183602 main.go:141] libmachine: Parsing certificate...
	I1002 21:52:00.472071 1183602 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:52:00.472099 1183602 main.go:141] libmachine: Decoding PEM data...
	I1002 21:52:00.472109 1183602 main.go:141] libmachine: Parsing certificate...
	I1002 21:52:00.472588 1183602 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:52:00.505631 1183602 cli_runner.go:211] docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:52:00.505728 1183602 network_create.go:284] running [docker network inspect no-preload-661954] to gather additional debugging logs...
	I1002 21:52:00.505747 1183602 cli_runner.go:164] Run: docker network inspect no-preload-661954
	W1002 21:52:00.551515 1183602 cli_runner.go:211] docker network inspect no-preload-661954 returned with exit code 1
	I1002 21:52:00.551569 1183602 network_create.go:287] error running [docker network inspect no-preload-661954]: docker network inspect no-preload-661954: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-661954 not found
	I1002 21:52:00.551584 1183602 network_create.go:289] output of [docker network inspect no-preload-661954]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-661954 not found
	
	** /stderr **
	I1002 21:52:00.551690 1183602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:52:00.574029 1183602 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:52:00.575178 1183602 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:52:00.575564 1183602 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:52:00.575980 1183602 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ed3dcf8a9554 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:30:19:34:2b:70} reservation:<nil>}
	I1002 21:52:00.577289 1183602 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e4c000}
	I1002 21:52:00.577369 1183602 network_create.go:124] attempt to create docker network no-preload-661954 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 21:52:00.577539 1183602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-661954 no-preload-661954
	I1002 21:52:00.648734 1183602 network_create.go:108] docker network no-preload-661954 192.168.85.0/24 created
	I1002 21:52:00.648773 1183602 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-661954" container
	I1002 21:52:00.648853 1183602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:52:00.667103 1183602 cli_runner.go:164] Run: docker volume create no-preload-661954 --label name.minikube.sigs.k8s.io=no-preload-661954 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:52:00.688191 1183602 oci.go:103] Successfully created a docker volume no-preload-661954
	I1002 21:52:00.688281 1183602 cli_runner.go:164] Run: docker run --rm --name no-preload-661954-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-661954 --entrypoint /usr/bin/test -v no-preload-661954:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:52:00.757472 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1002 21:52:00.757568 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1002 21:52:00.783533 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1002 21:52:00.832100 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 21:52:00.832125 1183602 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 437.528545ms
	I1002 21:52:00.832138 1183602 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 21:52:00.850916 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1002 21:52:00.854124 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1002 21:52:00.866315 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1002 21:52:00.872566 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1002 21:52:01.356098 1183602 oci.go:107] Successfully prepared a docker volume no-preload-661954
	I1002 21:52:01.356130 1183602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1002 21:52:01.356254 1183602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:52:01.356359 1183602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:52:01.452467 1183602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-661954 --name no-preload-661954 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-661954 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-661954 --network no-preload-661954 --ip 192.168.85.2 --volume no-preload-661954:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:52:01.477767 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 21:52:01.477796 1183602 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.083280846s
	I1002 21:52:01.477808 1183602 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 21:52:01.858364 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 21:52:01.858394 1183602 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.46334274s
	I1002 21:52:01.858406 1183602 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 21:52:01.900834 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 21:52:01.901128 1183602 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.506257774s
	I1002 21:52:01.901150 1183602 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 21:52:01.994551 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Running}}
	I1002 21:52:02.000416 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 21:52:02.000453 1183602 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.605366125s
	I1002 21:52:02.000467 1183602 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 21:52:02.004013 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 21:52:02.004041 1183602 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.608770909s
	I1002 21:52:02.004053 1183602 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 21:52:02.059697 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:02.142206 1183602 cli_runner.go:164] Run: docker exec no-preload-661954 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:52:02.269782 1183602 oci.go:144] the created container "no-preload-661954" has a running status.
	I1002 21:52:02.269810 1183602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa...
	I1002 21:52:02.766458 1183602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:52:02.839965 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:02.885926 1183602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:52:02.885944 1183602 kic_runner.go:114] Args: [docker exec --privileged no-preload-661954 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:52:02.994541 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:03.025775 1183602 machine.go:93] provisionDockerMachine start ...
	I1002 21:52:03.025873 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:03.090274 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:03.090625 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:03.090636 1183602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:52:03.091346 1183602 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:52:03.157966 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 21:52:03.157997 1183602 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.763131996s
	I1002 21:52:03.158009 1183602 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 21:52:03.158020 1183602 cache.go:87] Successfully saved all images to host disk.
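The cache.go lines above follow a check-then-save pattern: cache.go:157 stats the tarball, cache.go:96 reports how long the image took, and cache.go:80 confirms the save. A minimal Go sketch of that existence check, with a hypothetical local cache directory standing in for the Jenkins path (an illustration, not minikube's actual code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps an image ref like "registry.k8s.io/kube-proxy:v1.34.1" to
// the tarball layout visible in the log: the ':' tag separator becomes '_'
// under cache/images/<arch>/.
func cachePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := ".minikube/cache/images/arm64" // hypothetical stand-in
	img := "registry.k8s.io/kube-proxy:v1.34.1"
	p := cachePath(cacheDir, img)
	if _, err := os.Stat(p); err == nil {
		// Matches the "exists ... succeeded" sequence in the log.
		fmt.Printf("%s exists, skipping save\n", p)
		return
	}
	fmt.Printf("cache miss for %s, would pull and save to %s\n", img, p)
}
```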
	I1002 21:52:04.908120 1182747 cli_runner.go:164] Run: docker network inspect old-k8s-version-714101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:52:04.924276 1182747 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:52:04.928515 1182747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:52:04.938717 1182747 kubeadm.go:883] updating cluster {Name:old-k8s-version-714101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:52:04.938842 1182747 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 21:52:04.938901 1182747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:52:04.977483 1182747 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:52:04.977508 1182747 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:52:04.977564 1182747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:52:05.007339 1182747 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:52:05.007367 1182747 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:52:05.007376 1182747 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1002 21:52:05.007518 1182747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-714101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
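The generated drop-in above is rendered from the cluster config (kubelet binary path, hostname override, node IP). A sketch of producing such a unit with text/template; the struct and field names here are hypothetical, not minikube's:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletUnit carries only the fields visible in the generated drop-in
// above; both the struct and the template are illustrative.
type kubeletUnit struct {
	BinaryDir, Hostname, NodeIP string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinaryDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Written to stdout here; the real flow scp's the rendered bytes to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the
	// "scp memory" line further down).
	_ = t.Execute(os.Stdout, kubeletUnit{
		BinaryDir: "/var/lib/minikube/binaries/v1.28.0",
		Hostname:  "old-k8s-version-714101",
		NodeIP:    "192.168.76.2",
	})
}
```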
	I1002 21:52:05.007633 1182747 ssh_runner.go:195] Run: crio config
	I1002 21:52:05.081704 1182747 cni.go:84] Creating CNI manager for ""
	I1002 21:52:05.081781 1182747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:05.081815 1182747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:52:05.081867 1182747 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-714101 NodeName:old-k8s-version-714101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:52:05.082132 1182747 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-714101"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
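The KubeletConfiguration block above disables disk-pressure eviction by pinning every evictionHard threshold to 0% and imageGCHighThresholdPercent to 100. A sketch of reading those fields back with the third-party gopkg.in/yaml.v3 package; the struct is a deliberately partial, illustrative view:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletConfig declares only the fields this sketch inspects.
type kubeletConfig struct {
	Kind                        string            `yaml:"kind"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		log.Fatal(err)
	}
	// The 0% thresholds plus the 100% GC threshold effectively disable
	// disk resource management, matching the comment in the generated
	// config above.
	fmt.Printf("kind=%s gc=%d%% evictionHard=%v failSwapOn=%v\n",
		cfg.Kind, cfg.ImageGCHighThresholdPercent, cfg.EvictionHard, cfg.FailSwapOn)
}
```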
	I1002 21:52:05.082225 1182747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 21:52:05.091115 1182747 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:52:05.091221 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:52:05.099165 1182747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 21:52:05.112704 1182747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:52:05.126070 1182747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1002 21:52:05.140406 1182747 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:52:05.144163 1182747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
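The one-liner above is an idempotent hosts-file update: filter out any existing control-plane.minikube.internal line, append the fresh mapping, then copy the result back into place. The same logic in Go, pointed at a scratch file instead of /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale tab-separated mapping for host, then
// appends the desired one - mirroring the grep -v / echo / cp pipeline.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the existing mapping, if any
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0o644)
}

func main() {
	tmp := "hosts.sandbox" // stand-in for /etc/hosts
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry(tmp, "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
		return
	}
	b, _ := os.ReadFile(tmp)
	fmt.Print(string(b))
}
```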
	I1002 21:52:05.154420 1182747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:05.268809 1182747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:05.285244 1182747 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101 for IP: 192.168.76.2
	I1002 21:52:05.285265 1182747 certs.go:195] generating shared ca certs ...
	I1002 21:52:05.285281 1182747 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:05.285441 1182747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:52:05.285506 1182747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:52:05.285522 1182747 certs.go:257] generating profile certs ...
	I1002 21:52:05.285636 1182747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.key
	I1002 21:52:05.285716 1182747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/apiserver.key.36c72c45
	I1002 21:52:05.285757 1182747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/proxy-client.key
	I1002 21:52:05.285893 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:52:05.285938 1182747 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:52:05.285950 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:52:05.285979 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:52:05.286015 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:52:05.286077 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:52:05.286129 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:05.286800 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:52:05.308492 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:52:05.326935 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:52:05.345250 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:52:05.363787 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 21:52:05.392773 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:52:05.417673 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:52:05.445533 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:52:05.464549 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:52:05.488914 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:52:05.510479 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:52:05.529235 1182747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:52:05.542917 1182747 ssh_runner.go:195] Run: openssl version
	I1002 21:52:05.549087 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:52:05.557773 1182747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:52:05.561509 1182747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:52:05.561572 1182747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:52:05.603241 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:52:05.611484 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:52:05.619995 1182747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:05.624535 1182747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:05.624603 1182747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:05.665555 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:52:05.673492 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:52:05.681686 1182747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:52:05.685348 1182747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:52:05.685413 1182747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:52:05.726677 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:52:05.734486 1182747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:52:05.738318 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:52:05.779220 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:52:05.820214 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:52:05.860965 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:52:05.903878 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:52:05.949837 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
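Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent test in Go with crypto/x509, against a hypothetical PEM path:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the first certificate in a PEM file expires
// within the given window - the same predicate as `openssl x509 -checkend`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; on a minikube node this would be one of the
	// /var/lib/minikube/certs/... files checked above.
	expiring, err := checkend("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
```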
	I1002 21:52:06.006706 1182747 kubeadm.go:400] StartCluster: {Name:old-k8s-version-714101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:52:06.006806 1182747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:52:06.006897 1182747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:52:06.132960 1182747 cri.go:89] found id: "5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe"
	I1002 21:52:06.132987 1182747 cri.go:89] found id: "c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473"
	I1002 21:52:06.132992 1182747 cri.go:89] found id: "3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb"
	I1002 21:52:06.133004 1182747 cri.go:89] found id: "d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81"
	I1002 21:52:06.133007 1182747 cri.go:89] found id: ""
	I1002 21:52:06.133055 1182747 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:52:06.157820 1182747 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:52:06Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:52:06.157914 1182747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:52:06.167967 1182747 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:52:06.167990 1182747 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:52:06.168051 1182747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:52:06.178260 1182747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:52:06.178722 1182747 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-714101" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:52:06.178844 1182747 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-714101" cluster setting kubeconfig missing "old-k8s-version-714101" context setting]
	I1002 21:52:06.179174 1182747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:06.180783 1182747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:52:06.197499 1182747 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:52:06.197544 1182747 kubeadm.go:601] duration metric: took 29.547421ms to restartPrimaryControlPlane
	I1002 21:52:06.197555 1182747 kubeadm.go:402] duration metric: took 190.86711ms to StartCluster
	I1002 21:52:06.197570 1182747 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:06.197655 1182747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:52:06.198353 1182747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:06.198595 1182747 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:52:06.199024 1182747 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:52:06.199018 1182747 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:52:06.199138 1182747 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-714101"
	I1002 21:52:06.199152 1182747 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-714101"
	W1002 21:52:06.199169 1182747 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:52:06.199193 1182747 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:52:06.199634 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.199810 1182747 addons.go:69] Setting dashboard=true in profile "old-k8s-version-714101"
	I1002 21:52:06.199850 1182747 addons.go:238] Setting addon dashboard=true in "old-k8s-version-714101"
	W1002 21:52:06.199881 1182747 addons.go:247] addon dashboard should already be in state true
	I1002 21:52:06.199920 1182747 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:52:06.200130 1182747 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-714101"
	I1002 21:52:06.200155 1182747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-714101"
	I1002 21:52:06.200410 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.200465 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.203676 1182747 out.go:179] * Verifying Kubernetes components...
	I1002 21:52:06.209938 1182747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:06.290867 1182747 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:52:06.290927 1182747 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:06.294211 1182747 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:06.294226 1182747 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-714101"
	W1002 21:52:06.294243 1182747 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:52:06.294268 1182747 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:52:06.295812 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.294231 1182747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:52:06.296065 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:06.304292 1182747 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:52:06.310910 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:52:06.310937 1182747 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:52:06.311020 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:06.372421 1182747 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:06.372441 1182747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:52:06.372508 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:06.373915 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:06.374429 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:06.411460 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:06.632669 1182747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:06.690703 1182747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:06.699296 1182747 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-714101" to be "Ready" ...
	I1002 21:52:06.726586 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:52:06.726611 1182747 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:52:06.766697 1182747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:06.782303 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:52:06.782324 1182747 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:52:06.910398 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:52:06.910422 1182747 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:52:06.282412 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:52:06.282436 1183602 ubuntu.go:182] provisioning hostname "no-preload-661954"
	I1002 21:52:06.282502 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:06.335705 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:06.336022 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:06.336034 1183602 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-661954 && echo "no-preload-661954" | sudo tee /etc/hostname
	I1002 21:52:06.559460 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:52:06.559533 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:06.580631 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:06.580940 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:06.580957 1183602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-661954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-661954/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-661954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:52:06.750628 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:52:06.750708 1183602 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:52:06.750746 1183602 ubuntu.go:190] setting up certificates
	I1002 21:52:06.750786 1183602 provision.go:84] configureAuth start
	I1002 21:52:06.750890 1183602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:52:06.782565 1183602 provision.go:143] copyHostCerts
	I1002 21:52:06.782627 1183602 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:52:06.782637 1183602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:52:06.782720 1183602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:52:06.782811 1183602 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:52:06.782817 1183602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:52:06.782842 1183602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:52:06.782892 1183602 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:52:06.782897 1183602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:52:06.782919 1183602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:52:06.782973 1183602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.no-preload-661954 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-661954]
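provision.go:117 above generates a server certificate whose SAN set covers 127.0.0.1, 192.168.85.2, localhost, minikube, and the node name. A sketch of a certificate template carrying the same SAN shape; it is self-signed for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-661954"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		// IP and DNS SANs split out of the san=[...] list in the log line.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-661954"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as parent. The real flow would pass
	// the CA certificate and CA private key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```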
	I1002 21:52:07.005870 1183602 provision.go:177] copyRemoteCerts
	I1002 21:52:07.005996 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:52:07.006091 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.030249 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:07.136071 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:52:07.171937 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:52:07.200316 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:52:07.236860 1183602 provision.go:87] duration metric: took 486.034622ms to configureAuth
	I1002 21:52:07.236891 1183602 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:52:07.237070 1183602 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:52:07.237174 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.259036 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:07.259426 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:07.259474 1183602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:52:07.627312 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
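The SSH command above writes a one-line environment file for the CRI-O unit and restarts the service. A sketch of the file write against a scratch path; the real flow pipes through `sudo tee /etc/sysconfig/crio.minikube` over SSH and then runs `systemctl restart crio`:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Same contents as the tee'd file in the log.
	const contents = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	path := "crio.minikube.sandbox" // hypothetical stand-in for /etc/sysconfig/crio.minikube
	if err := os.WriteFile(path, []byte(contents), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote", path, "- on the node this is followed by: systemctl restart crio")
}
```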
	I1002 21:52:07.627386 1183602 machine.go:96] duration metric: took 4.601588502s to provisionDockerMachine
	I1002 21:52:07.627411 1183602 client.go:171] duration metric: took 7.15554567s to LocalClient.Create
	I1002 21:52:07.627437 1183602 start.go:168] duration metric: took 7.155661236s to libmachine.API.Create "no-preload-661954"
	I1002 21:52:07.627480 1183602 start.go:294] postStartSetup for "no-preload-661954" (driver="docker")
	I1002 21:52:07.627503 1183602 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:52:07.627599 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:52:07.627671 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.658223 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:07.779630 1183602 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:52:07.783328 1183602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:52:07.783359 1183602 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:52:07.783371 1183602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:52:07.783438 1183602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:52:07.783522 1183602 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:52:07.783630 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:52:07.795930 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:07.823880 1183602 start.go:297] duration metric: took 196.372991ms for postStartSetup
	I1002 21:52:07.824240 1183602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:52:07.851385 1183602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:52:07.851661 1183602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:52:07.851700 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.890144 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:07.998431 1183602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:52:08.010193 1183602 start.go:129] duration metric: took 7.551624447s to createHost
	I1002 21:52:08.010234 1183602 start.go:84] releasing machines lock for "no-preload-661954", held for 7.551758983s
	I1002 21:52:08.010307 1183602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:52:08.031230 1183602 ssh_runner.go:195] Run: cat /version.json
	I1002 21:52:08.031285 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:08.031354 1183602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:52:08.031428 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:08.052848 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:08.075531 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:08.178315 1183602 ssh_runner.go:195] Run: systemctl --version
	I1002 21:52:08.291334 1183602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:52:08.337419 1183602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:52:08.342412 1183602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:52:08.342482 1183602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:52:08.393326 1183602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:52:08.393349 1183602 start.go:496] detecting cgroup driver to use...
	I1002 21:52:08.393380 1183602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:52:08.393436 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:52:08.412206 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:52:08.425781 1183602 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:52:08.425842 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:52:08.444808 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:52:08.463810 1183602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:52:08.679064 1183602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:52:08.898670 1183602 docker.go:234] disabling docker service ...
	I1002 21:52:08.898742 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:52:08.942406 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:52:08.960859 1183602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:52:09.159789 1183602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:52:09.371617 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:52:09.392037 1183602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:52:09.421836 1183602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:52:09.421959 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.448449 1183602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:52:09.448577 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.466577 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.484718 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.496716 1183602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:52:09.510273 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.524773 1183602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.547373 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
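The run of sed commands above rewrites keys in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). The same line-anchored replacements expressed with Go's regexp package, over an in-memory sample rather than the real file:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
```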
	I1002 21:52:09.557919 1183602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:52:09.567316 1183602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:52:09.579961 1183602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:09.785054 1183602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:52:09.964271 1183602 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:52:09.964413 1183602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:52:09.970450 1183602 start.go:564] Will wait 60s for crictl version
	I1002 21:52:09.970566 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:09.976759 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:52:10.031709 1183602 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:52:10.031817 1183602 ssh_runner.go:195] Run: crio --version
	I1002 21:52:10.089399 1183602 ssh_runner.go:195] Run: crio --version
	I1002 21:52:10.147130 1183602 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:52:07.047321 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:52:07.047346 1182747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:52:07.115188 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:52:07.115213 1182747 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:52:07.206364 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:52:07.206391 1182747 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:52:07.275539 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:52:07.275565 1182747 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:52:07.303139 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:52:07.303167 1182747 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:52:07.351056 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:52:07.351082 1182747 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:52:07.389460 1182747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
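All ten staged dashboard manifests are then applied in one kubectl invocation (the log line above). A sketch that assembles that command with os/exec; it prints rather than executes, and the KUBECONFIG/sudo plumbing from the log is noted in comments only:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// File names taken from the scp lines above.
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	// The real invocation is run over SSH as:
	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f ... -f ...
	cmd := exec.Command("kubectl", args...)
	fmt.Println(cmd.String())
}
```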
	I1002 21:52:10.150001 1183602 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:52:10.183806 1183602 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:52:10.188228 1183602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:52:10.204098 1183602 kubeadm.go:883] updating cluster {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:52:10.204206 1183602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:52:10.204248 1183602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:52:10.242487 1183602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 21:52:10.242509 1183602 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 21:52:10.242544 1183602 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:10.242733 1183602 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.242821 1183602 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.242900 1183602 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.242971 1183602 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.243040 1183602 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1002 21:52:10.243116 1183602 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.243196 1183602 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.246055 1183602 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.246307 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.246440 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.246559 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.246673 1183602 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:10.246965 1183602 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1002 21:52:10.247108 1183602 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.247290 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
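The eight "daemon lookup ... No such image" misses above are expected on a fresh node: before falling back to its on-disk cache, minikube first asks the local Docker daemon for each required image (it appears to do this through a daemon API rather than the CLI). A rough shell equivalent of one such probe, illustrative only:

	# non-zero exit when the daemon has no such image, mirroring the log lines above
	docker image inspect registry.k8s.io/pause:3.10.1 >/dev/null 2>&1 \
	  || echo "not in local daemon; fall back to the minikube on-disk image cache"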
	I1002 21:52:10.463117 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.492446 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.519939 1183602 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1002 21:52:10.519984 1183602 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.520040 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.520583 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.540281 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.541174 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1002 21:52:10.557997 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.561529 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.632637 1183602 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1002 21:52:10.632699 1183602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.632761 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.660771 1183602 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1002 21:52:10.660898 1183602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.660855 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.660981 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.797576 1183602 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1002 21:52:10.797657 1183602 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.797736 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.797843 1183602 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1002 21:52:10.797879 1183602 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1002 21:52:10.797921 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.798020 1183602 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1002 21:52:10.798087 1183602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.798131 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.831655 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.831734 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.831800 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.831898 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.831946 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.831972 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:52:10.832182 1183602 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1002 21:52:10.832227 1183602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.832264 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.991325 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.991412 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.991471 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.991526 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.991576 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.991634 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.991682 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:52:11.204821 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:52:11.204967 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:11.205049 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:11.205134 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:11.205209 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1002 21:52:11.205303 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:52:11.205391 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:11.205485 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:11.409976 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1002 21:52:11.410105 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:11.410164 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:52:11.410206 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1002 21:52:11.410005 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1002 21:52:11.410278 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:52:11.410332 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1002 21:52:11.410353 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1002 21:52:11.410413 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:52:11.410446 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1002 21:52:11.410481 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1002 21:52:11.410494 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1002 21:52:11.410602 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:52:11.538914 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1002 21:52:11.538964 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1002 21:52:11.539045 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1002 21:52:11.539137 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:52:11.539197 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1002 21:52:11.539215 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1002 21:52:11.539265 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1002 21:52:11.539289 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1002 21:52:11.539357 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1002 21:52:11.539374 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1002 21:52:11.539435 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1002 21:52:11.539454 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
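Each transfer above follows the same stat-then-scp pattern: the runner stats the target path on the node, and only a non-zero exit (file absent) triggers the copy. Sketched as plain shell — the ssh target is hypothetical, the paths are the ones from the log:

	# stat exits 1 when the tarball is missing, which is what triggers the scp
	ssh root@no-preload-661954 'stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1' \
	  || scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 \
	       root@no-preload-661954:/var/lib/minikube/images/pause_3.10.1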
	W1002 21:52:11.576414 1183602 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 21:52:11.576622 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:11.635994 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1002 21:52:11.636038 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1002 21:52:11.711734 1183602 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1002 21:52:11.711838 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1002 21:52:11.840730 1183602 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 21:52:11.840778 1183602 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:11.840828 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:12.171791 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:12.171867 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1002 21:52:12.321584 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:12.368610 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:52:12.368694 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:52:12.562198 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:12.606117 1182747 node_ready.go:49] node "old-k8s-version-714101" is "Ready"
	I1002 21:52:12.606146 1182747 node_ready.go:38] duration metric: took 5.906817679s for node "old-k8s-version-714101" to be "Ready" ...
	I1002 21:52:12.606159 1182747 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:52:12.606219 1182747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:52:15.820626 1182747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.129890195s)
	I1002 21:52:15.820697 1182747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.053981117s)
	I1002 21:52:16.516859 1182747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.127353559s)
	I1002 21:52:16.517072 1182747 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.910836562s)
	I1002 21:52:16.517100 1182747 api_server.go:72] duration metric: took 10.318460748s to wait for apiserver process to appear ...
	I1002 21:52:16.517106 1182747 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:52:16.517122 1182747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:52:16.520395 1182747 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-714101 addons enable metrics-server
	
	I1002 21:52:16.523493 1182747 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 21:52:16.526519 1182747 addons.go:514] duration metric: took 10.3274936s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 21:52:16.528431 1182747 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:52:16.529944 1182747 api_server.go:141] control plane version: v1.28.0
	I1002 21:52:16.529962 1182747 api_server.go:131] duration metric: took 12.849901ms to wait for apiserver health ...
	I1002 21:52:16.529971 1182747 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:52:16.543566 1182747 system_pods.go:59] 8 kube-system pods found
	I1002 21:52:16.543654 1182747 system_pods.go:61] "coredns-5dd5756b68-f7qdk" [848cb78b-98da-49f0-ab85-a772e528b803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:52:16.543682 1182747 system_pods.go:61] "etcd-old-k8s-version-714101" [0966d28a-21e6-417e-8aed-41590aa75beb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:52:16.543723 1182747 system_pods.go:61] "kindnet-qgs2b" [4f2179e4-429f-4a72-886a-c6a3e321a396] Running
	I1002 21:52:16.543752 1182747 system_pods.go:61] "kube-apiserver-old-k8s-version-714101" [b247aaad-25ed-457f-ad85-afbaccf7bc72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:52:16.543776 1182747 system_pods.go:61] "kube-controller-manager-old-k8s-version-714101" [7ea997bf-4afe-409e-bcb9-ea894e8f83e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:52:16.543812 1182747 system_pods.go:61] "kube-proxy-9ktm4" [902dc118-e33e-4d60-8711-8394ffefed71] Running
	I1002 21:52:16.543848 1182747 system_pods.go:61] "kube-scheduler-old-k8s-version-714101" [3d6d0916-6ad4-4a46-ba43-bb0812e6ccd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:52:16.543868 1182747 system_pods.go:61] "storage-provisioner" [84b9ee34-40ec-4d3f-9171-c7a8578abb2b] Running
	I1002 21:52:16.543900 1182747 system_pods.go:74] duration metric: took 13.922784ms to wait for pod list to return data ...
	I1002 21:52:16.543926 1182747 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:52:16.552592 1182747 default_sa.go:45] found service account: "default"
	I1002 21:52:16.552612 1182747 default_sa.go:55] duration metric: took 8.668112ms for default service account to be created ...
	I1002 21:52:16.552621 1182747 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:52:16.558730 1182747 system_pods.go:86] 8 kube-system pods found
	I1002 21:52:16.558758 1182747 system_pods.go:89] "coredns-5dd5756b68-f7qdk" [848cb78b-98da-49f0-ab85-a772e528b803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:52:16.558767 1182747 system_pods.go:89] "etcd-old-k8s-version-714101" [0966d28a-21e6-417e-8aed-41590aa75beb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:52:16.558773 1182747 system_pods.go:89] "kindnet-qgs2b" [4f2179e4-429f-4a72-886a-c6a3e321a396] Running
	I1002 21:52:16.558780 1182747 system_pods.go:89] "kube-apiserver-old-k8s-version-714101" [b247aaad-25ed-457f-ad85-afbaccf7bc72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:52:16.558786 1182747 system_pods.go:89] "kube-controller-manager-old-k8s-version-714101" [7ea997bf-4afe-409e-bcb9-ea894e8f83e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:52:16.558791 1182747 system_pods.go:89] "kube-proxy-9ktm4" [902dc118-e33e-4d60-8711-8394ffefed71] Running
	I1002 21:52:16.558797 1182747 system_pods.go:89] "kube-scheduler-old-k8s-version-714101" [3d6d0916-6ad4-4a46-ba43-bb0812e6ccd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:52:16.558800 1182747 system_pods.go:89] "storage-provisioner" [84b9ee34-40ec-4d3f-9171-c7a8578abb2b] Running
	I1002 21:52:16.558808 1182747 system_pods.go:126] duration metric: took 6.180555ms to wait for k8s-apps to be running ...
	I1002 21:52:16.558815 1182747 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:52:16.558870 1182747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:52:16.583260 1182747 system_svc.go:56] duration metric: took 24.433858ms WaitForService to wait for kubelet
	I1002 21:52:16.583289 1182747 kubeadm.go:586] duration metric: took 10.384658164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:52:16.583311 1182747 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:52:16.588814 1182747 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:52:16.588852 1182747 node_conditions.go:123] node cpu capacity is 2
	I1002 21:52:16.588865 1182747 node_conditions.go:105] duration metric: took 5.548711ms to run NodePressure ...
	I1002 21:52:16.588878 1182747 start.go:242] waiting for startup goroutines ...
	I1002 21:52:16.588886 1182747 start.go:247] waiting for cluster config update ...
	I1002 21:52:16.588897 1182747 start.go:256] writing updated cluster config ...
	I1002 21:52:16.589162 1182747 ssh_runner.go:195] Run: rm -f paused
	I1002 21:52:16.597225 1182747 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:52:16.607644 1182747 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-f7qdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:15.599272 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (3.230549708s)
	I1002 21:52:15.599305 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1002 21:52:15.599322 1183602 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:52:15.599378 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:52:15.599438 1183602 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.037215874s)
	I1002 21:52:15.599486 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 21:52:15.599566 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:52:17.568676 1183602 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.969082899s)
	I1002 21:52:17.568711 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 21:52:17.568735 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1002 21:52:17.568870 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.969478335s)
	I1002 21:52:17.568887 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1002 21:52:17.568903 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:52:17.568959 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:52:19.014580 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.445601107s)
	I1002 21:52:19.014609 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1002 21:52:19.014629 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:52:19.014677 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1002 21:52:18.614599 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:20.618578 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:20.188891 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.174185049s)
	I1002 21:52:20.188919 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1002 21:52:20.188938 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:52:20.188984 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:52:21.615829 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.426811443s)
	I1002 21:52:21.615858 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1002 21:52:21.615883 1183602 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:52:21.615929 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1002 21:52:23.114918 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:25.115584 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:25.272273 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.656315467s)
	I1002 21:52:25.272299 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1002 21:52:25.272317 1183602 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:52:25.272369 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:52:25.902517 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 21:52:25.902550 1183602 cache_images.go:124] Successfully loaded all cached images
	I1002 21:52:25.902556 1183602 cache_images.go:93] duration metric: took 15.660034703s to LoadCachedImages
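Note that the staged tarballs are loaded strictly one at a time: each "crio.go:275] Loading image:" line waits for the previous podman load to return, which is why etcd's ~98 MB archive alone accounts for roughly 3.7 s of the 15.66 s total. The sequence is equivalent to something like:

	# sequential load of the staged archives into the CRI-O image store via podman
	for tarball in /var/lib/minikube/images/*; do
	  sudo podman load -i "$tarball"
	done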
	I1002 21:52:25.902566 1183602 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:52:25.902654 1183602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-661954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:52:25.902734 1183602 ssh_runner.go:195] Run: crio config
	I1002 21:52:25.973838 1183602 cni.go:84] Creating CNI manager for ""
	I1002 21:52:25.973869 1183602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:25.973888 1183602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:52:25.973912 1183602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-661954 NodeName:no-preload-661954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:52:25.974063 1183602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-661954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
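The generated file above stitches four kubeadm API objects into one manifest: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Before the real kubeadm init further down, the same file could be exercised without persistent changes — a hedged example, not something this test actually runs:

	# parse and render the staged config without making persistent changes
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run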
	
	I1002 21:52:25.974145 1183602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:52:25.982616 1183602 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1002 21:52:25.982679 1183602 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1002 21:52:25.990991 1183602 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1002 21:52:25.991116 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1002 21:52:25.991940 1183602 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1002 21:52:25.991939 1183602 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1002 21:52:25.996153 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1002 21:52:25.996207 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1002 21:52:27.259110 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1002 21:52:27.263321 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1002 21:52:27.263378 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1002 21:52:27.267502 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:52:27.303800 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1002 21:52:27.331735 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1002 21:52:27.331830 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
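The "checksum=file:" suffix in the download URLs above means each binary is verified against the published .sha256 file that sits next to it on dl.k8s.io. Reproduced by hand, with the version and arch taken from the log:

	# download kubelet and verify it against the published digest
	v=v1.34.1; arch=arm64
	curl -fLO "https://dl.k8s.io/release/${v}/bin/linux/${arch}/kubelet"
	echo "$(curl -fsSL https://dl.k8s.io/release/${v}/bin/linux/${arch}/kubelet.sha256)  kubelet" \
	  | sha256sum -c -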
	I1002 21:52:28.076956 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:52:28.104105 1183602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:52:28.134631 1183602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:52:28.162166 1183602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 21:52:28.193332 1183602 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:52:28.203263 1183602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
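The one-liner above is an idempotent hosts-file update: strip any existing control-plane.minikube.internal entry, append the current one, and copy the result back as root. Unrolled for readability:

	# remove a stale entry (if any), append the fresh one, install atomically via cp
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts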
	I1002 21:52:28.214521 1183602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:28.384599 1183602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:28.408688 1183602 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954 for IP: 192.168.85.2
	I1002 21:52:28.408761 1183602 certs.go:195] generating shared ca certs ...
	I1002 21:52:28.408792 1183602 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:28.408984 1183602 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:52:28.409066 1183602 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:52:28.409094 1183602 certs.go:257] generating profile certs ...
	I1002 21:52:28.409177 1183602 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key
	I1002 21:52:28.409219 1183602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt with IP's: []
	I1002 21:52:29.200941 1183602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt ...
	I1002 21:52:29.200970 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: {Name:mk3043b5efd47e137543aa61b0e942b7285caeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:29.201147 1183602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key ...
	I1002 21:52:29.201163 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key: {Name:mk0fc54489a5bd53f8de9284e56b6b1960465035 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:29.201244 1183602 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4
	I1002 21:52:29.201264 1183602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 21:52:30.269819 1183602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4 ...
	I1002 21:52:30.269853 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4: {Name:mke6b2152d98df0dbd9b59ac789c14469c552e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:30.270076 1183602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4 ...
	I1002 21:52:30.270093 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4: {Name:mk566bf146cd3961f59f68f90b034cfea289f2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:30.270196 1183602 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt
	I1002 21:52:30.270279 1183602 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key
	I1002 21:52:30.270344 1183602 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key
	I1002 21:52:30.270363 1183602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt with IP's: []
	I1002 21:52:31.099977 1183602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt ...
	I1002 21:52:31.100051 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt: {Name:mkd6fffa7f1694d88a53de97fbabde36d0fd81bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:31.100280 1183602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key ...
	I1002 21:52:31.100316 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key: {Name:mk3ce58ca0666401bdf7ac0d09ce5e4906dda4e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
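minikube generates these profile certs with its own crypto helpers (crypto.go) rather than shelling out to openssl, but each step is the familiar CSR-and-sign flow against the shared minikubeCA. A rough openssl analogue for the client cert — the CN/O values here are assumptions for illustration, not taken from this log:

	# hypothetical equivalent: key, CSR, then sign with the cluster CA
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" \
	  | openssl x509 -req -CA ca.crt -CAkey ca.key -CAcreateserial -days 1095 -out client.crt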
	I1002 21:52:31.100585 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:52:31.100654 1183602 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:52:31.100696 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:52:31.100750 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:52:31.100809 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:52:31.100855 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:52:31.100941 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:31.101581 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:52:31.136493 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:52:31.157145 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:52:31.177036 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:52:31.195051 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:52:31.212376 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:52:31.229859 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:52:31.248452 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:52:31.265622 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:52:31.283263 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:52:31.300415 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:52:31.320782 1183602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:52:31.333539 1183602 ssh_runner.go:195] Run: openssl version
	I1002 21:52:31.346531 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:52:31.355545 1183602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:31.360535 1183602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:31.360647 1183602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:31.403791 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:52:31.411967 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:52:31.423398 1183602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:52:31.427507 1183602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:52:31.427621 1183602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:52:31.486535 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:52:31.494999 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:52:31.503346 1183602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:52:31.516373 1183602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:52:31.516501 1183602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:52:31.559691 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
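The three openssl-hash/ln-fs pairs above install each CA into the OpenSSL trust directory, where certificates are looked up by subject-hash filenames of the form <hash>.0 (here b5213941.0, 51391683.0, 3ec20f2e.0). The general pattern:

	# link a CA into /etc/ssl/certs under its subject hash so OpenSSL can find it
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"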
	I1002 21:52:31.576553 1183602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:52:31.588264 1183602 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:52:31.588385 1183602 kubeadm.go:400] StartCluster: {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:52:31.588485 1183602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:52:31.588579 1183602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:52:31.660615 1183602 cri.go:89] found id: ""
	I1002 21:52:31.660763 1183602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:52:31.673220 1183602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:52:31.687368 1183602 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:52:31.687479 1183602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:52:31.700264 1183602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:52:31.700322 1183602 kubeadm.go:157] found existing configuration files:
	
	I1002 21:52:31.700415 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:52:31.709633 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:52:31.709743 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:52:31.721421 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:52:31.735792 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:52:31.735924 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:52:31.743148 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:52:31.752173 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:52:31.752241 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:52:31.763997 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:52:31.772498 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:52:31.772629 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
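The four grep/rm cycles above implement the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed before init (here all four are simply absent). Condensed:

	# drop any kubeconfig that does not reference the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done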
	I1002 21:52:31.782422 1183602 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:52:31.832657 1183602 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:52:31.833085 1183602 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:52:31.892150 1183602 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:52:31.892321 1183602 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:52:31.892393 1183602 kubeadm.go:318] OS: Linux
	I1002 21:52:31.892478 1183602 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:52:31.892567 1183602 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:52:31.892675 1183602 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:52:31.892772 1183602 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:52:31.892874 1183602 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:52:31.892971 1183602 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:52:31.893059 1183602 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:52:31.893154 1183602 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:52:31.893243 1183602 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:52:32.005768 1183602 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:52:32.005885 1183602 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:52:32.005981 1183602 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:52:32.029562 1183602 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1002 21:52:27.623145 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:30.120811 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:32.036102 1183602 out.go:252]   - Generating certificates and keys ...
	I1002 21:52:32.036214 1183602 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:52:32.036287 1183602 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:52:32.362394 1183602 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:52:32.766331 1183602 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:52:33.485776 1183602 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:52:34.042469 1183602 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:52:34.633016 1183602 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:52:34.634435 1183602 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-661954] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:52:34.748402 1183602 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:52:34.750393 1183602 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-661954] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:52:34.843060 1183602 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	W1002 21:52:32.616702 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:34.631227 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:35.398425 1183602 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:52:35.711364 1183602 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:52:35.711748 1183602 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:52:36.019472 1183602 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:52:36.248579 1183602 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:52:36.522602 1183602 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:52:37.203209 1183602 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:52:37.999514 1183602 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:52:38.000356 1183602 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:52:38.003130 1183602 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:52:38.007414 1183602 out.go:252]   - Booting up control plane ...
	I1002 21:52:38.007542 1183602 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:52:38.007630 1183602 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:52:38.007705 1183602 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:52:38.025758 1183602 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:52:38.025872 1183602 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:52:38.034878 1183602 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:52:38.035560 1183602 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:52:38.035847 1183602 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:52:38.213509 1183602 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:52:38.213642 1183602 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:52:39.215095 1183602 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001654449s
	I1002 21:52:39.218499 1183602 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:52:39.218599 1183602 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 21:52:39.218909 1183602 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:52:39.218999 1183602 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 21:52:37.119201 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:39.614815 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:41.625232 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:43.389464 1183602 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.170574362s
	I1002 21:52:44.643428 1183602 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.424872283s
	I1002 21:52:46.725656 1183602 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.505046261s
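	The [control-plane-check] phase above polls plain HTTP(S) health endpoints: the kubelet at http://127.0.0.1:10248/healthz, and the control-plane components at the livez/healthz ports listed in the log. A Go sketch of the kubelet probe; the 2-second client timeout is an assumed value (kubeadm's own budget per the log is 4m0s), and probing the apiserver's https://192.168.85.2:8443/livez would additionally need the cluster CA in the TLS config:
	
		package main
		
		import (
			"fmt"
			"net/http"
			"time"
		)
		
		func main() {
			// Same endpoint the [kubelet-check] phase polls above.
			client := &http.Client{Timeout: 2 * time.Second}
			resp, err := client.Get("http://127.0.0.1:10248/healthz")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			fmt.Println("kubelet healthz:", resp.Status)
		}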
	I1002 21:52:46.744381 1183602 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:52:46.761249 1183602 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:52:46.775924 1183602 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:52:46.776156 1183602 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-661954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:52:46.788301 1183602 kubeadm.go:318] [bootstrap-token] Using token: di0bi5.u1ybuxaty6dqdvqe
	W1002 21:52:44.114536 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:46.612987 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:46.791209 1183602 out.go:252]   - Configuring RBAC rules ...
	I1002 21:52:46.791347 1183602 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:52:46.795767 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:52:46.804818 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:52:46.811490 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:52:46.815607 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:52:46.820259 1183602 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:52:47.131604 1183602 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:52:47.590451 1183602 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:52:48.130678 1183602 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:52:48.133160 1183602 kubeadm.go:318] 
	I1002 21:52:48.133246 1183602 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:52:48.133261 1183602 kubeadm.go:318] 
	I1002 21:52:48.133343 1183602 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:52:48.133355 1183602 kubeadm.go:318] 
	I1002 21:52:48.133382 1183602 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:52:48.133471 1183602 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:52:48.133571 1183602 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:52:48.133585 1183602 kubeadm.go:318] 
	I1002 21:52:48.133649 1183602 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:52:48.133655 1183602 kubeadm.go:318] 
	I1002 21:52:48.133729 1183602 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:52:48.133735 1183602 kubeadm.go:318] 
	I1002 21:52:48.133800 1183602 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:52:48.133914 1183602 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:52:48.134003 1183602 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:52:48.134013 1183602 kubeadm.go:318] 
	I1002 21:52:48.134132 1183602 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:52:48.134255 1183602 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:52:48.134267 1183602 kubeadm.go:318] 
	I1002 21:52:48.134377 1183602 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token di0bi5.u1ybuxaty6dqdvqe \
	I1002 21:52:48.134517 1183602 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:52:48.134551 1183602 kubeadm.go:318] 	--control-plane 
	I1002 21:52:48.134566 1183602 kubeadm.go:318] 
	I1002 21:52:48.134672 1183602 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:52:48.134683 1183602 kubeadm.go:318] 
	I1002 21:52:48.134794 1183602 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token di0bi5.u1ybuxaty6dqdvqe \
	I1002 21:52:48.134923 1183602 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:52:48.138480 1183602 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:52:48.138728 1183602 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:52:48.138848 1183602 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
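	The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A sketch that recomputes it from the certificate directory named earlier in the log ("[certs] Using certificateDir folder /var/lib/minikube/certs"):
	
		package main
		
		import (
			"crypto/sha256"
			"crypto/x509"
			"encoding/hex"
			"encoding/pem"
			"fmt"
			"os"
		)
		
		func main() {
			pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
			if err != nil {
				panic(err)
			}
			block, _ := pem.Decode(pemBytes)
			if block == nil {
				panic("no PEM block in ca.crt")
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				panic(err)
			}
			// kubeadm pins the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo,
			// printed as the sha256:... value seen in the join commands above.
			sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
			fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
		}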
	I1002 21:52:48.138870 1183602 cni.go:84] Creating CNI manager for ""
	I1002 21:52:48.138878 1183602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:48.142100 1183602 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:52:48.145059 1183602 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:52:48.151414 1183602 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:52:48.151439 1183602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:52:48.166991 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
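	Applying cni.yaml ultimately drops a conflist on the node; the CRI-O log further down picks it up as /etc/cni/net.d/10-kindnet.conflist with a ptp plugin, and the portmap binary stat'ed above is chained in for hostPort support. A plausible minimal shape of that file, assembled from details visible elsewhere in this report (ptp type, mtu 1500 from the kindnet log, the 10.244.0.0/24 pod CIDR) rather than copied from the 2601-byte manifest itself:
	
		{
		  "cniVersion": "0.3.1",
		  "name": "kindnet",
		  "plugins": [
		    {
		      "type": "ptp",
		      "mtu": 1500,
		      "ipam": {
		        "type": "host-local",
		        "ranges": [[{ "subnet": "10.244.0.0/24" }]]
		      }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}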
	I1002 21:52:48.474839 1183602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:52:48.475012 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:48.475111 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-661954 minikube.k8s.io/updated_at=2025_10_02T21_52_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=no-preload-661954 minikube.k8s.io/primary=true
	I1002 21:52:48.708022 1183602 ops.go:34] apiserver oom_adj: -16
	I1002 21:52:48.708127 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:49.209066 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:49.708269 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1002 21:52:48.613956 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:51.114910 1182747 pod_ready.go:94] pod "coredns-5dd5756b68-f7qdk" is "Ready"
	I1002 21:52:51.114943 1182747 pod_ready.go:86] duration metric: took 34.507271712s for pod "coredns-5dd5756b68-f7qdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.118109 1182747 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.123967 1182747 pod_ready.go:94] pod "etcd-old-k8s-version-714101" is "Ready"
	I1002 21:52:51.124005 1182747 pod_ready.go:86] duration metric: took 5.868326ms for pod "etcd-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.127901 1182747 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.133651 1182747 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714101" is "Ready"
	I1002 21:52:51.133687 1182747 pod_ready.go:86] duration metric: took 5.716345ms for pod "kube-apiserver-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.137091 1182747 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.311463 1182747 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714101" is "Ready"
	I1002 21:52:51.311493 1182747 pod_ready.go:86] duration metric: took 174.3747ms for pod "kube-controller-manager-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.512464 1182747 pod_ready.go:83] waiting for pod "kube-proxy-9ktm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.911836 1182747 pod_ready.go:94] pod "kube-proxy-9ktm4" is "Ready"
	I1002 21:52:51.911866 1182747 pod_ready.go:86] duration metric: took 399.373304ms for pod "kube-proxy-9ktm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:50.208309 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:50.708288 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:51.208500 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:51.708453 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:52.208399 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:52.305731 1183602 kubeadm.go:1113] duration metric: took 3.830801358s to wait for elevateKubeSystemPrivileges
	I1002 21:52:52.305759 1183602 kubeadm.go:402] duration metric: took 20.717380405s to StartCluster
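	The repeated "kubectl get sa default" runs above are the readiness gate behind elevateKubeSystemPrivileges: bootstrap is not treated as finished until the default ServiceAccount exists, since it is created asynchronously by the controller manager after startup. The same gate expressed with client-go, as a sketch; the kubeconfig path is the one from the log, while the 500ms interval and one-minute timeout are assumptions:
	
		package main
		
		import (
			"context"
			"fmt"
			"time"
		
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			// Poll (assumed cadence) until the "default" ServiceAccount exists,
			// mirroring the retried `kubectl get sa default` calls in the log.
			err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
				func(ctx context.Context) (bool, error) {
					_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
					return err == nil, nil
				})
			if err != nil {
				panic(err)
			}
			fmt.Println("default ServiceAccount is present")
		}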
	I1002 21:52:52.305776 1183602 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:52.305835 1183602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:52:52.306902 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:52.307124 1183602 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:52:52.307204 1183602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:52:52.307434 1183602 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:52:52.307460 1183602 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:52:52.307546 1183602 addons.go:69] Setting storage-provisioner=true in profile "no-preload-661954"
	I1002 21:52:52.307553 1183602 addons.go:69] Setting default-storageclass=true in profile "no-preload-661954"
	I1002 21:52:52.307561 1183602 addons.go:238] Setting addon storage-provisioner=true in "no-preload-661954"
	I1002 21:52:52.307568 1183602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-661954"
	I1002 21:52:52.307588 1183602 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:52:52.307920 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:52.308077 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:52.311070 1183602 out.go:179] * Verifying Kubernetes components...
	I1002 21:52:52.314169 1183602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:52.364059 1183602 addons.go:238] Setting addon default-storageclass=true in "no-preload-661954"
	I1002 21:52:52.364099 1183602 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:52:52.364514 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:52.371867 1183602 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:52.111920 1182747 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:52.511930 1182747 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714101" is "Ready"
	I1002 21:52:52.511960 1182747 pod_ready.go:86] duration metric: took 400.013911ms for pod "kube-scheduler-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:52.511973 1182747 pod_ready.go:40] duration metric: took 35.914713644s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:52:52.615971 1182747 start.go:627] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 21:52:52.618896 1182747 out.go:203] 
	W1002 21:52:52.621735 1182747 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 21:52:52.624508 1182747 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 21:52:52.627525 1182747 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714101" cluster and "default" namespace by default
	I1002 21:52:52.374931 1183602 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:52.374960 1183602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:52:52.375025 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:52.405200 1183602 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:52.405219 1183602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:52:52.405282 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:52.421776 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:52.447876 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:52.748782 1183602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:52.748836 1183602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:52:52.863097 1183602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:52.991633 1183602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:53.731441 1183602 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
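	The sed pipeline a few entries up rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward plugin and a log directive ahead of errors. Judging from that expression, the patched Corefile should read roughly as follows (unrelated directives elided):
	
		.:53 {
		    log
		    errors
		    ...
		    hosts {
		       192.168.85.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		    ...
		}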
	I1002 21:52:53.733422 1183602 node_ready.go:35] waiting up to 6m0s for node "no-preload-661954" to be "Ready" ...
	I1002 21:52:54.045904 1183602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.054235822s)
	I1002 21:52:54.046608 1183602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.183476615s)
	I1002 21:52:54.076724 1183602 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:52:54.079821 1183602 addons.go:514] duration metric: took 1.772351703s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:52:54.237862 1183602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-661954" context rescaled to 1 replicas
	W1002 21:52:55.737074 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:52:57.738702 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:00.248307 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:02.736781 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:04.736884 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
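	Both the pod_ready waits earlier (pod "coredns-..." is not "Ready", will retry) and the node_ready retries here poll the same structure: a status.conditions list whose Ready entry must report True; the node stays False until kindnet finishes wiring the pod network. A client-go sketch of the node-side check; the node name and kubeconfig path are from the log, and wrapping this in a retry loop (as minikube does) is left out for brevity:
	
		package main
		
		import (
			"context"
			"fmt"
		
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		// nodeIsReady reports whether the NodeReady condition is True - the status
		// the "will retry" lines above keep polling for.
		func nodeIsReady(node *corev1.Node) bool {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue
				}
			}
			return false
		}
		
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-661954", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			fmt.Println("Ready:", nodeIsReady(node))
		}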
	
	
	==> CRI-O <==
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.977955847Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.981516492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.981669276Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.981743095Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.985720295Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.985873366Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.985947194Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.991887845Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.99195556Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.991980093Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.998312324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.998480787Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.46608932Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=47cc7af1-dbb3-4a82-bf50-b4f35103e2fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.467067526Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=27343265-0362-4b19-a54c-6de2fa2781ad name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.468225781Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper" id=a3fd7157-d11b-408f-b96b-81a3d81c2cd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.468457775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.477249699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.477899085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.495145498Z" level=info msg="Created container 1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper" id=a3fd7157-d11b-408f-b96b-81a3d81c2cd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.495914897Z" level=info msg="Starting container: 1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1" id=95c00582-4d99-421c-b024-ab40f2a081bf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.497391806Z" level=info msg="Started container" PID=1705 containerID=1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper id=95c00582-4d99-421c-b024-ab40f2a081bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698
	Oct 02 21:52:57 old-k8s-version-714101 conmon[1703]: conmon 1b693f68ee8dd6e2309e <ninfo>: container 1705 exited with status 1
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.740030413Z" level=info msg="Removing container: 1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c" id=70527afd-cfbe-4eff-824e-abedb0c4f1e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.754621872Z" level=info msg="Error loading conmon cgroup of container 1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c: cgroup deleted" id=70527afd-cfbe-4eff-824e-abedb0c4f1e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.761056706Z" level=info msg="Removed container 1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper" id=70527afd-cfbe-4eff-824e-abedb0c4f1e3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1b693f68ee8dd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   2                   f438f6485c9e8       dashboard-metrics-scraper-5f989dc9cf-b8gtl       kubernetes-dashboard
	dd0b9cee0f7e3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   e8d95ce22e87e       storage-provisioner                              kube-system
	a877467241334       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   baf83198bfb1e       kubernetes-dashboard-8694d4445c-m6s5z            kubernetes-dashboard
	5df3b4c4cd17f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   bef6ae50ff6f1       coredns-5dd5756b68-f7qdk                         kube-system
	4cc075aa1f2df       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   a53de98d142aa       busybox                                          default
	6ede589ba5dbe       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   51c5ec2ff84df       kindnet-qgs2b                                    kube-system
	b46c6d49eaea1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   e8d95ce22e87e       storage-provisioner                              kube-system
	9af8e58137628       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   32a6007e37f89       kube-proxy-9ktm4                                 kube-system
	5a83b3b5fdd18       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   41bd58323735e       kube-apiserver-old-k8s-version-714101            kube-system
	c3aedcafe119f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   f2b51ae40d874       kube-scheduler-old-k8s-version-714101            kube-system
	3c684fbd5a7c3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   bf2a8bfaa7b38       kube-controller-manager-old-k8s-version-714101   kube-system
	d36efcb47e31c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   14046e7067387       etcd-old-k8s-version-714101                      kube-system
	
	
	==> coredns [5df3b4c4cd17f23a02364ab3315805996102ae0b9cc8eaa6c40c3633d6493a30] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41909 - 57219 "HINFO IN 7321415675562643261.1914517515117912143. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013445766s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-714101
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-714101
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=old-k8s-version-714101
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_51_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:50:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-714101
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:53:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-714101
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 67eee4e12caa4c1d823624a8d719cd18
	  System UUID:                fd388e5a-8f2f-4643-a470-d71d3d179fee
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-f7qdk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-old-k8s-version-714101                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-qgs2b                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-714101             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-714101    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-9ktm4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-714101             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b8gtl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-m6s5z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 52s                    kube-proxy       
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-714101 event: Registered Node old-k8s-version-714101 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-714101 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                    node-controller  Node old-k8s-version-714101 event: Registered Node old-k8s-version-714101 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81] <==
	{"level":"info","ts":"2025-10-02T21:52:06.724316Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:52:06.724336Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:52:06.724526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T21:52:06.724582Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T21:52:06.724646Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:52:06.72467Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:52:06.733464Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:52:06.7335Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:52:06.726022Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T21:52:06.733681Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T21:52:06.733701Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T21:52:08.020834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T21:52:08.020962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T21:52:08.021016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T21:52:08.02107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.021103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.021158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.021196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.027364Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-714101 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T21:52:08.02758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:52:08.028283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T21:52:08.028386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T21:52:08.028436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:52:08.029364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T21:52:08.066915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:53:08 up  6:35,  0 user,  load average: 4.44, 2.37, 1.75
	Linux old-k8s-version-714101 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ede589ba5dbe96da518cad779870f082e644a9d501eb5455c06ab59e4fb8bd8] <==
	I1002 21:52:13.757317       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:52:13.758307       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:52:13.758438       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:52:13.758449       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:52:13.758462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:52:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:52:13.967177       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:52:14.011475       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:52:14.011585       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:52:14.011771       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:52:43.968872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:52:43.970509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:52:43.970716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:52:43.983166       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 21:52:45.612613       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:52:45.612657       1 metrics.go:72] Registering metrics
	I1002 21:52:45.612711       1 controller.go:711] "Syncing nftables rules"
	I1002 21:52:53.967461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:52:53.968565       1 main.go:301] handling current node
	I1002 21:53:03.972153       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:53:03.972187       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe] <==
	I1002 21:52:12.592749       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 21:52:12.595995       1 aggregator.go:166] initial CRD sync complete...
	I1002 21:52:12.596051       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 21:52:12.596080       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:52:12.596113       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:52:12.596328       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 21:52:12.597168       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 21:52:12.657054       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:52:12.695676       1 trace.go:236] Trace[1711679665]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:50fc29fa-fba4-4c93-b619-e3ec5b5aae13,client:192.168.76.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (02-Oct-2025 21:52:12.090) (total time: 605ms):
	Trace[1711679665]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-714101" already exists 104ms (21:52:12.695)
	Trace[1711679665]: [605.564491ms] [605.564491ms] END
	I1002 21:52:12.729571       1 trace.go:236] Trace[532147769]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6b91340a-bf34-46eb-81a3-b6dfb97715a8,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (02-Oct-2025 21:52:12.090) (total time: 638ms):
	Trace[532147769]: [638.660085ms] [638.660085ms] END
	E1002 21:52:12.916611       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:52:13.043549       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:52:16.283178       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 21:52:16.369980       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 21:52:16.409597       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:52:16.421067       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:52:16.434982       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 21:52:16.487984       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.182.0"}
	I1002 21:52:16.508512       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.16.127"}
	I1002 21:52:26.689769       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:52:26.694357       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 21:52:27.059903       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb] <==
	I1002 21:52:26.755205       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 21:52:26.799653       1 shared_informer.go:318] Caches are synced for disruption
	I1002 21:52:26.827330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.752249ms"
	I1002 21:52:26.827781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.431µs"
	I1002 21:52:26.847155       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	I1002 21:52:26.847257       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-m6s5z"
	I1002 21:52:26.870801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="157.64732ms"
	I1002 21:52:26.883122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="160.44092ms"
	I1002 21:52:26.892260       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="21.316959ms"
	I1002 21:52:26.893178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.038µs"
	I1002 21:52:26.966085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.84844ms"
	I1002 21:52:26.977324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.964µs"
	I1002 21:52:27.026145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.943895ms"
	I1002 21:52:27.026315       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.516µs"
	I1002 21:52:27.165601       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 21:52:27.165699       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 21:52:27.211921       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 21:52:34.737153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.889949ms"
	I1002 21:52:34.737271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.912µs"
	I1002 21:52:40.716877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.57µs"
	I1002 21:52:41.723626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.958µs"
	I1002 21:52:42.718737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.292µs"
	I1002 21:52:51.008297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.54547ms"
	I1002 21:52:51.009882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.039µs"
	I1002 21:52:57.762483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.136µs"
	
	
	==> kube-proxy [9af8e581376283afb448db698b74533910e6b9797b8fec4999d5b566a07a942a] <==
	I1002 21:52:15.054412       1 server_others.go:69] "Using iptables proxy"
	I1002 21:52:15.145263       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1002 21:52:15.950629       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:52:16.070341       1 server_others.go:152] "Using iptables Proxier"
	I1002 21:52:16.070388       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 21:52:16.070397       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 21:52:16.070426       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 21:52:16.070649       1 server.go:846] "Version info" version="v1.28.0"
	I1002 21:52:16.070667       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:52:16.075550       1 config.go:188] "Starting service config controller"
	I1002 21:52:16.075579       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 21:52:16.075603       1 config.go:97] "Starting endpoint slice config controller"
	I1002 21:52:16.075607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 21:52:16.084512       1 config.go:315] "Starting node config controller"
	I1002 21:52:16.084541       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 21:52:16.177174       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 21:52:16.177223       1 shared_informer.go:318] Caches are synced for service config
	I1002 21:52:16.186748       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473] <==
	I1002 21:52:09.791079       1 serving.go:348] Generated self-signed cert in-memory
	I1002 21:52:15.925260       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1002 21:52:15.925294       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:52:15.936226       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 21:52:15.936431       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 21:52:15.936449       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 21:52:15.936463       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 21:52:15.979364       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:52:15.979410       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 21:52:15.986430       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:52:15.986465       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 21:52:16.057203       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 21:52:16.092038       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 21:52:16.094274       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 21:52:20 old-k8s-version-714101 kubelet[776]: I1002 21:52:20.972215     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.861315     776 topology_manager.go:215] "Topology Admit Handler" podUID="f3377233-589f-43c3-8135-33c09c2b7651" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-m6s5z"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.880061     776 topology_manager.go:215] "Topology Admit Handler" podUID="34ef4770-262b-49f2-848d-505bea074a2b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.921488     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3377233-589f-43c3-8135-33c09c2b7651-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-m6s5z\" (UID: \"f3377233-589f-43c3-8135-33c09c2b7651\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-m6s5z"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.921563     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzln\" (UniqueName: \"kubernetes.io/projected/f3377233-589f-43c3-8135-33c09c2b7651-kube-api-access-ktzln\") pod \"kubernetes-dashboard-8694d4445c-m6s5z\" (UID: \"f3377233-589f-43c3-8135-33c09c2b7651\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-m6s5z"
	Oct 02 21:52:27 old-k8s-version-714101 kubelet[776]: I1002 21:52:27.022672     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/34ef4770-262b-49f2-848d-505bea074a2b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8gtl\" (UID: \"34ef4770-262b-49f2-848d-505bea074a2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	Oct 02 21:52:27 old-k8s-version-714101 kubelet[776]: I1002 21:52:27.022744     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557fj\" (UniqueName: \"kubernetes.io/projected/34ef4770-262b-49f2-848d-505bea074a2b-kube-api-access-557fj\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8gtl\" (UID: \"34ef4770-262b-49f2-848d-505bea074a2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	Oct 02 21:52:27 old-k8s-version-714101 kubelet[776]: W1002 21:52:27.246361     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/crio-f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698 WatchSource:0}: Error finding container f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698: Status 404 returned error can't find the container with id f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698
	Oct 02 21:52:40 old-k8s-version-714101 kubelet[776]: I1002 21:52:40.688066     776 scope.go:117] "RemoveContainer" containerID="82981fca6afbb9af1675da32301aa1dd945ad912d451ce0363a73ca0b4587bae"
	Oct 02 21:52:40 old-k8s-version-714101 kubelet[776]: I1002 21:52:40.714223     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-m6s5z" podStartSLOduration=7.825599074 podCreationTimestamp="2025-10-02 21:52:26 +0000 UTC" firstStartedPulling="2025-10-02 21:52:27.255143458 +0000 UTC m=+21.968091278" lastFinishedPulling="2025-10-02 21:52:34.143012397 +0000 UTC m=+28.855960225" observedRunningTime="2025-10-02 21:52:34.703135599 +0000 UTC m=+29.416083427" watchObservedRunningTime="2025-10-02 21:52:40.713468021 +0000 UTC m=+35.426415849"
	Oct 02 21:52:41 old-k8s-version-714101 kubelet[776]: I1002 21:52:41.692067     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:41 old-k8s-version-714101 kubelet[776]: I1002 21:52:41.693175     776 scope.go:117] "RemoveContainer" containerID="82981fca6afbb9af1675da32301aa1dd945ad912d451ce0363a73ca0b4587bae"
	Oct 02 21:52:41 old-k8s-version-714101 kubelet[776]: E1002 21:52:41.706722     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:52:42 old-k8s-version-714101 kubelet[776]: I1002 21:52:42.695855     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:42 old-k8s-version-714101 kubelet[776]: E1002 21:52:42.696587     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:52:45 old-k8s-version-714101 kubelet[776]: I1002 21:52:45.703675     776 scope.go:117] "RemoveContainer" containerID="b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560"
	Oct 02 21:52:47 old-k8s-version-714101 kubelet[776]: I1002 21:52:47.183498     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:47 old-k8s-version-714101 kubelet[776]: E1002 21:52:47.183865     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: I1002 21:52:57.465020     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: I1002 21:52:57.737248     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: I1002 21:52:57.737716     776 scope.go:117] "RemoveContainer" containerID="1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: E1002 21:52:57.738171     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:53:05 old-k8s-version-714101 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:53:05 old-k8s-version-714101 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:53:05 old-k8s-version-714101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a87746724133411149d8880820420b63e43012165b82087926e62ccf4703f7e5] <==
	2025/10/02 21:52:34 Using namespace: kubernetes-dashboard
	2025/10/02 21:52:34 Using in-cluster config to connect to apiserver
	2025/10/02 21:52:34 Using secret token for csrf signing
	2025/10/02 21:52:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:52:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:52:34 Successful initial request to the apiserver, version: v1.28.0
	2025/10/02 21:52:34 Generating JWE encryption key
	2025/10/02 21:52:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:52:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:52:35 Initializing JWE encryption key from synchronized object
	2025/10/02 21:52:35 Creating in-cluster Sidecar client
	2025/10/02 21:52:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:52:35 Serving insecurely on HTTP port: 9090
	2025/10/02 21:53:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:52:34 Starting overwatch
	
	
	==> storage-provisioner [b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560] <==
	I1002 21:52:14.722934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:52:44.724967       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dd0b9cee0f7e3c89c78675db4423007b357320668c81cbc1fdc6e2b580b171de] <==
	I1002 21:52:45.781591       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:52:45.802713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:52:45.802869       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 21:53:03.206742       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:53:03.206876       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89d3a326-7957-4e4b-8a32-0337fd7fbaa5", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-714101_0fe09646-4bb0-4e8b-8df2-8be4624cef47 became leader
	I1002 21:53:03.207579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714101_0fe09646-4bb0-4e8b-8df2-8be4624cef47!
	I1002 21:53:03.308258       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714101_0fe09646-4bb0-4e8b-8df2-8be4624cef47!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714101 -n old-k8s-version-714101
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714101 -n old-k8s-version-714101: exit status 2 (390.074212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-714101 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-714101
helpers_test.go:243: (dbg) docker inspect old-k8s-version-714101:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67",
	        "Created": "2025-10-02T21:50:34.734644622Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:51:57.408781147Z",
	            "FinishedAt": "2025-10-02T21:51:56.31189149Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/hosts",
	        "LogPath": "/var/lib/docker/containers/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67-json.log",
	        "Name": "/old-k8s-version-714101",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-714101:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-714101",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67",
	                "LowerDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20a52751eb46976a24fc5abf4becbbbbb7c2efef8e12481f2765ae857910ffd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-714101",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-714101/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-714101",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-714101",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-714101",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a46b01e6b24191a2bd3da230c8c9b6b8d1a8e8a6bdf8d7c761cec0d6a056e273",
	            "SandboxKey": "/var/run/docker/netns/a46b01e6b241",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-714101": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:be:28:97:4c:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed3dcf8a95545fb7e9009343422d8cf7e7334b26a46fbfef0ce71c0f5ff11be4",
	                    "EndpointID": "f0cea423ccacf08c73233bc4db84daf085f044c92aa2dcea421c3c4c57ad518b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-714101",
	                        "e7b0b66ac30c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101: exit status 2 (360.031067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-714101 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-714101 logs -n 25: (1.48861569s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-644857 sudo containerd config dump                                                                                                                                                                                                  │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ -p cilium-644857 sudo crio config                                                                                                                                                                                                             │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ delete  │ -p cilium-644857                                                                                                                                                                                                                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │ 02 Oct 25 21:41 UTC │
	│ start   │ -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ force-systemd-flag-987043 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-flag-987043                                                                                                                                                                                                                  │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ stop    │ -p old-k8s-version-714101 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954         │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:51:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:51:59.930668 1183602 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:51:59.930838 1183602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:59.930870 1183602 out.go:374] Setting ErrFile to fd 2...
	I1002 21:51:59.930890 1183602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:59.931179 1183602 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:51:59.931663 1183602 out.go:368] Setting JSON to false
	I1002 21:51:59.932576 1183602 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23657,"bootTime":1759418263,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:51:59.932671 1183602 start.go:140] virtualization:  
	I1002 21:51:59.935808 1183602 out.go:179] * [no-preload-661954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:51:59.940166 1183602 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:51:59.940279 1183602 notify.go:221] Checking for updates...
	I1002 21:51:59.946446 1183602 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:51:59.949583 1183602 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:51:59.952538 1183602 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:51:59.955520 1183602 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:51:59.958481 1183602 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:51:59.962168 1183602 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:51:59.962274 1183602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:51:59.987677 1183602 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:51:59.987797 1183602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:52:00.177516 1183602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:52:00.140684711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:52:00.177635 1183602 docker.go:319] overlay module found
	I1002 21:52:00.180998 1183602 out.go:179] * Using the docker driver based on user configuration
	I1002 21:52:00.184243 1183602 start.go:306] selected driver: docker
	I1002 21:52:00.184276 1183602 start.go:936] validating driver "docker" against <nil>
	I1002 21:52:00.184290 1183602 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:52:00.185135 1183602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:52:00.367141 1183602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:52:00.338128369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:52:00.367340 1183602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:52:00.367636 1183602 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:52:00.371530 1183602 out.go:179] * Using Docker driver with root privileges
	I1002 21:52:00.376319 1183602 cni.go:84] Creating CNI manager for ""
	I1002 21:52:00.376416 1183602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:00.376427 1183602 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:52:00.376515 1183602 start.go:350] cluster config:
	{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:52:00.383208 1183602 out.go:179] * Starting "no-preload-661954" primary control-plane node in "no-preload-661954" cluster
	I1002 21:52:00.386221 1183602 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:52:00.389486 1183602 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:52:00.393677 1183602 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:52:00.393704 1183602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:52:00.393892 1183602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:52:00.393938 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json: {Name:mkbdb847e5e448aec408b7974fc06806dcf744ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:00.394479 1183602 cache.go:107] acquiring lock: {Name:mk77546a797d48dfa87e4f15444ebfe2ae46de0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394523 1183602 cache.go:107] acquiring lock: {Name:mkb30203224ed1c1a4b88d93d3aeb9a29d46fb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394560 1183602 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 21:52:00.394571 1183602 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.86µs
	I1002 21:52:00.394584 1183602 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 21:52:00.394600 1183602 cache.go:107] acquiring lock: {Name:mk17c8111e11ff4babf675464dda89dffef8dccd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394625 1183602 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:00.394723 1183602 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1002 21:52:00.394867 1183602 cache.go:107] acquiring lock: {Name:mk232b04a28dc0f5922a8e36bb60d83a371a69dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394875 1183602 cache.go:107] acquiring lock: {Name:mk2aab2e3052911889ff3d13b07414606ffa2c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.394952 1183602 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:00.394977 1183602 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:00.395051 1183602 cache.go:107] acquiring lock: {Name:mkb9b4c6e229a9543f9236d679c4b53878bc9ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.395088 1183602 cache.go:107] acquiring lock: {Name:mkb1bbde6510d7fb66d3923ec81dcf1545e1aeca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.395141 1183602 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:00.395171 1183602 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:00.395272 1183602 cache.go:107] acquiring lock: {Name:mk783e98a1246826a6f16b0bd25f720d93184154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.395350 1183602 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:00.397362 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:00.398386 1183602 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:00.398557 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:00.398788 1183602 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:00.398935 1183602 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1002 21:52:00.398951 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:00.399262 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:00.458235 1183602 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:52:00.458262 1183602 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:52:00.458277 1183602 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:52:00.458326 1183602 start.go:361] acquireMachinesLock for no-preload-661954: {Name:mk6a385b42202eaf12d2e98c4a7f7a9c153c60e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:52:00.458453 1183602 start.go:365] duration metric: took 106.295µs to acquireMachinesLock for "no-preload-661954"
	I1002 21:52:00.458483 1183602 start.go:94] Provisioning new machine with config: &{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:52:00.458553 1183602 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:51:57.377044 1182747 out.go:252] * Restarting existing docker container for "old-k8s-version-714101" ...
	I1002 21:51:57.377126 1182747 cli_runner.go:164] Run: docker start old-k8s-version-714101
	I1002 21:51:57.737528 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:51:57.760429 1182747 kic.go:430] container "old-k8s-version-714101" state is running.
	I1002 21:51:57.760783 1182747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714101
	I1002 21:51:57.783976 1182747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/config.json ...
	I1002 21:51:57.784213 1182747 machine.go:93] provisionDockerMachine start ...
	I1002 21:51:57.784277 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:51:57.821820 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:51:57.822239 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:51:57.822251 1182747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:51:57.822897 1182747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:52:01.007081 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714101
	
	I1002 21:52:01.007113 1182747 ubuntu.go:182] provisioning hostname "old-k8s-version-714101"
	I1002 21:52:01.007184 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:01.044199 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:01.044495 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:52:01.044506 1182747 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-714101 && echo "old-k8s-version-714101" | sudo tee /etc/hostname
	I1002 21:52:01.279327 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-714101
	
	I1002 21:52:01.279412 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:01.306593 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:01.306905 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:52:01.306928 1182747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-714101' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-714101/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-714101' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:52:01.465821 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
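The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1 in /etc/hosts, and it is written to be idempotent: an existing 127.0.1.1 entry is rewritten in place, otherwise a new one is appended. A quick way to confirm the mapping took effect (a sketch to run inside the container, e.g. via docker exec):

    # NSS-backed lookup; expects "127.0.1.1  old-k8s-version-714101" after provisioning
    getent hosts old-k8s-version-714101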
	I1002 21:52:01.465849 1182747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:52:01.465868 1182747 ubuntu.go:190] setting up certificates
	I1002 21:52:01.465876 1182747 provision.go:84] configureAuth start
	I1002 21:52:01.465937 1182747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714101
	I1002 21:52:01.517590 1182747 provision.go:143] copyHostCerts
	I1002 21:52:01.517658 1182747 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:52:01.517702 1182747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:52:01.517773 1182747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:52:01.517888 1182747 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:52:01.517901 1182747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:52:01.517931 1182747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:52:01.517989 1182747 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:52:01.517999 1182747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:52:01.518028 1182747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:52:01.518114 1182747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-714101 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-714101]
	I1002 21:52:02.117854 1182747 provision.go:177] copyRemoteCerts
	I1002 21:52:02.131285 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:52:02.131421 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:02.221107 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:02.414379 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:52:02.457114 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:52:02.499247 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 21:52:02.534308 1182747 provision.go:87] duration metric: took 1.06841828s to configureAuth
	I1002 21:52:02.534336 1182747 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:52:02.534525 1182747 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:52:02.534703 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:02.593415 1182747 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:02.593719 1182747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34186 <nil> <nil>}
	I1002 21:52:02.593733 1182747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:52:03.263384 1182747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:52:03.263407 1182747 machine.go:96] duration metric: took 5.479185025s to provisionDockerMachine
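The CRIO_MINIKUBE_OPTIONS value written above only matters if something feeds it back into crio's command line. This log does not show that wiring, but the usual systemd pattern is a drop-in unit that sources the sysconfig file; a hypothetical drop-in along those lines (file name and ExecStart path are assumptions, not taken from this run):

    # /etc/systemd/system/crio.service.d/10-minikube.conf (hypothetical wiring;
    # the real drop-in ships inside the kicbase image and is not shown in this log)
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS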
	I1002 21:52:03.263418 1182747 start.go:294] postStartSetup for "old-k8s-version-714101" (driver="docker")
	I1002 21:52:03.263428 1182747 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:52:03.263500 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:52:03.263556 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.289628 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.396518 1182747 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:52:03.400969 1182747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:52:03.400995 1182747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:52:03.401021 1182747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:52:03.401083 1182747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:52:03.401171 1182747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:52:03.401278 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:52:03.411693 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:03.435368 1182747 start.go:297] duration metric: took 171.935031ms for postStartSetup
	I1002 21:52:03.435483 1182747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:52:03.435551 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.463154 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.567278 1182747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:52:03.572577 1182747 fix.go:57] duration metric: took 6.220622496s for fixHost
	I1002 21:52:03.572605 1182747 start.go:84] releasing machines lock for "old-k8s-version-714101", held for 6.220674786s
	I1002 21:52:03.572686 1182747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-714101
	I1002 21:52:03.595159 1182747 ssh_runner.go:195] Run: cat /version.json
	I1002 21:52:03.595211 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.595428 1182747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:52:03.595489 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:03.631786 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.633164 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:03.726146 1182747 ssh_runner.go:195] Run: systemctl --version
	I1002 21:52:03.831847 1182747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:52:03.868822 1182747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:52:03.873888 1182747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:52:03.873959 1182747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:52:03.881419 1182747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:52:03.881443 1182747 start.go:496] detecting cgroup driver to use...
	I1002 21:52:03.881474 1182747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:52:03.881522 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:52:03.896854 1182747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:52:03.909804 1182747 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:52:03.909894 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:52:03.925281 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:52:03.938545 1182747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:52:04.058668 1182747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:52:04.192039 1182747 docker.go:234] disabling docker service ...
	I1002 21:52:04.192122 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:52:04.209778 1182747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:52:04.223333 1182747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:52:04.339179 1182747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:52:04.448278 1182747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:52:04.461370 1182747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:52:04.475518 1182747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 21:52:04.475631 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.484536 1182747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:52:04.484654 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.496911 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.506896 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.515812 1182747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:52:04.524060 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.533435 1182747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.542077 1182747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:04.550918 1182747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:52:04.558353 1182747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:52:04.565961 1182747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:04.681640 1182747 ssh_runner.go:195] Run: sudo systemctl restart crio
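Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is reconstructed from the commands themselves, not dumped from the file, and the surrounding TOML table headers are omitted:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]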
	I1002 21:52:04.808046 1182747 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:52:04.808163 1182747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:52:04.812043 1182747 start.go:564] Will wait 60s for crictl version
	I1002 21:52:04.812150 1182747 ssh_runner.go:195] Run: which crictl
	I1002 21:52:04.815826 1182747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:52:04.840576 1182747 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:52:04.840698 1182747 ssh_runner.go:195] Run: crio --version
	I1002 21:52:04.872175 1182747 ssh_runner.go:195] Run: crio --version
	I1002 21:52:04.905250 1182747 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1002 21:52:00.471333 1183602 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:52:00.471778 1183602 start.go:160] libmachine.API.Create for "no-preload-661954" (driver="docker")
	I1002 21:52:00.471857 1183602 client.go:168] LocalClient.Create starting
	I1002 21:52:00.471943 1183602 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:52:00.471987 1183602 main.go:141] libmachine: Decoding PEM data...
	I1002 21:52:00.472001 1183602 main.go:141] libmachine: Parsing certificate...
	I1002 21:52:00.472071 1183602 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:52:00.472099 1183602 main.go:141] libmachine: Decoding PEM data...
	I1002 21:52:00.472109 1183602 main.go:141] libmachine: Parsing certificate...
	I1002 21:52:00.472588 1183602 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:52:00.505631 1183602 cli_runner.go:211] docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:52:00.505728 1183602 network_create.go:284] running [docker network inspect no-preload-661954] to gather additional debugging logs...
	I1002 21:52:00.505747 1183602 cli_runner.go:164] Run: docker network inspect no-preload-661954
	W1002 21:52:00.551515 1183602 cli_runner.go:211] docker network inspect no-preload-661954 returned with exit code 1
	I1002 21:52:00.551569 1183602 network_create.go:287] error running [docker network inspect no-preload-661954]: docker network inspect no-preload-661954: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-661954 not found
	I1002 21:52:00.551584 1183602 network_create.go:289] output of [docker network inspect no-preload-661954]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-661954 not found
	
	** /stderr **
	I1002 21:52:00.551690 1183602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:52:00.574029 1183602 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:52:00.575178 1183602 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:52:00.575564 1183602 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:52:00.575980 1183602 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ed3dcf8a9554 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:30:19:34:2b:70} reservation:<nil>}
	I1002 21:52:00.577289 1183602 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e4c000}
	I1002 21:52:00.577369 1183602 network_create.go:124] attempt to create docker network no-preload-661954 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 21:52:00.577539 1183602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-661954 no-preload-661954
	I1002 21:52:00.648734 1183602 network_create.go:108] docker network no-preload-661954 192.168.85.0/24 created
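The subnet scan above walks candidate /24s in steps of 9 (192.168.49.0, .58, .67, .76) and creates the network on the first free one, 192.168.85.0/24. A rough shell equivalent of that probe, assuming only docker-managed networks need to be avoided (a sketch, not minikube's actual implementation):

    # collect subnets already claimed by docker networks, then take the first free candidate
    used=$(docker network ls -q | xargs docker network inspect \
      -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}')
    for third in 49 58 67 76 85 94; do
      case "$used" in
        *"192.168.$third.0/24"*) continue ;;   # taken, step to the next /24
        *) echo "free subnet: 192.168.$third.0/24"; break ;;
      esac
    done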
	I1002 21:52:00.648773 1183602 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-661954" container
	I1002 21:52:00.648853 1183602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:52:00.667103 1183602 cli_runner.go:164] Run: docker volume create no-preload-661954 --label name.minikube.sigs.k8s.io=no-preload-661954 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:52:00.688191 1183602 oci.go:103] Successfully created a docker volume no-preload-661954
	I1002 21:52:00.688281 1183602 cli_runner.go:164] Run: docker run --rm --name no-preload-661954-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-661954 --entrypoint /usr/bin/test -v no-preload-661954:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:52:00.757472 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1002 21:52:00.757568 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1002 21:52:00.783533 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1002 21:52:00.832100 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 21:52:00.832125 1183602 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 437.528545ms
	I1002 21:52:00.832138 1183602 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 21:52:00.850916 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1002 21:52:00.854124 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1002 21:52:00.866315 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1002 21:52:00.872566 1183602 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1002 21:52:01.356098 1183602 oci.go:107] Successfully prepared a docker volume no-preload-661954
	I1002 21:52:01.356130 1183602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1002 21:52:01.356254 1183602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:52:01.356359 1183602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:52:01.452467 1183602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-661954 --name no-preload-661954 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-661954 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-661954 --network no-preload-661954 --ip 192.168.85.2 --volume no-preload-661954:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
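Note the --publish=127.0.0.1:: flags in the docker run above: leaving the host port empty makes Docker bind a random ephemeral port on loopback, and the tooling later recovers it with the same container-inspect template that appears throughout this log (that is where SSH ports such as 34191 come from). A minimal reproduction with assumed names (demo container, alpine image):

    docker run -d --name demo --publish=127.0.0.1::22 alpine sleep 600
    # prints the ephemeral host port Docker chose for container port 22
    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' demo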
	I1002 21:52:01.477767 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 21:52:01.477796 1183602 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.083280846s
	I1002 21:52:01.477808 1183602 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 21:52:01.858364 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 21:52:01.858394 1183602 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.46334274s
	I1002 21:52:01.858406 1183602 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 21:52:01.900834 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 21:52:01.901128 1183602 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.506257774s
	I1002 21:52:01.901150 1183602 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 21:52:01.994551 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Running}}
	I1002 21:52:02.000416 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 21:52:02.000453 1183602 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.605366125s
	I1002 21:52:02.000467 1183602 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 21:52:02.004013 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 21:52:02.004041 1183602 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.608770909s
	I1002 21:52:02.004053 1183602 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 21:52:02.059697 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:02.142206 1183602 cli_runner.go:164] Run: docker exec no-preload-661954 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:52:02.269782 1183602 oci.go:144] the created container "no-preload-661954" has a running status.
	I1002 21:52:02.269810 1183602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa...
	I1002 21:52:02.766458 1183602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:52:02.839965 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:02.885926 1183602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:52:02.885944 1183602 kic_runner.go:114] Args: [docker exec --privileged no-preload-661954 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:52:02.994541 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:03.025775 1183602 machine.go:93] provisionDockerMachine start ...
	I1002 21:52:03.025873 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:03.090274 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:03.090625 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:03.090636 1183602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:52:03.091346 1183602 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:52:03.157966 1183602 cache.go:157] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 21:52:03.157997 1183602 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.763131996s
	I1002 21:52:03.158009 1183602 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 21:52:03.158020 1183602 cache.go:87] Successfully saved all images to host disk.
	I1002 21:52:04.908120 1182747 cli_runner.go:164] Run: docker network inspect old-k8s-version-714101 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:52:04.924276 1182747 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:52:04.928515 1182747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
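The one-liner above is an idempotent upsert: it filters any existing host.minikube.internal entry out of /etc/hosts, appends the fresh mapping, and copies the rebuilt file back in a single sudo step. The same pattern reappears below for control-plane.minikube.internal; generalized as a hypothetical helper:

    # upsert_host NAME IP - hypothetical helper mirroring the pattern in the log
    upsert_host() {
      { grep -v $'\t'"$1"'$' /etc/hosts; printf '%s\t%s\n' "$2" "$1"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    upsert_host host.minikube.internal 192.168.76.1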
	I1002 21:52:04.938717 1182747 kubeadm.go:883] updating cluster {Name:old-k8s-version-714101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:52:04.938842 1182747 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 21:52:04.938901 1182747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:52:04.977483 1182747 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:52:04.977508 1182747 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:52:04.977564 1182747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:52:05.007339 1182747 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:52:05.007367 1182747 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:52:05.007376 1182747 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1002 21:52:05.007518 1182747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-714101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:52:05.007633 1182747 ssh_runner.go:195] Run: crio config
	I1002 21:52:05.081704 1182747 cni.go:84] Creating CNI manager for ""
	I1002 21:52:05.081781 1182747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:05.081815 1182747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:52:05.081867 1182747 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-714101 NodeName:old-k8s-version-714101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:52:05.082132 1182747 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-714101"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
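	A generated config like the one above can be validated without touching node state by running kubeadm in dry-run mode (a sketch; the file path is the staging location this run scp's to below, and minikube itself drives kubeadm differently):
	
	    # parses the config and prints what init would do, without changing the node
	    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run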
	
	I1002 21:52:05.082225 1182747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 21:52:05.091115 1182747 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:52:05.091221 1182747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:52:05.099165 1182747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 21:52:05.112704 1182747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:52:05.126070 1182747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1002 21:52:05.140406 1182747 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:52:05.144163 1182747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:52:05.154420 1182747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:05.268809 1182747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:05.285244 1182747 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101 for IP: 192.168.76.2
	I1002 21:52:05.285265 1182747 certs.go:195] generating shared ca certs ...
	I1002 21:52:05.285281 1182747 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:05.285441 1182747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:52:05.285506 1182747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:52:05.285522 1182747 certs.go:257] generating profile certs ...
	I1002 21:52:05.285636 1182747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.key
	I1002 21:52:05.285716 1182747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/apiserver.key.36c72c45
	I1002 21:52:05.285757 1182747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/proxy-client.key
	I1002 21:52:05.285893 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:52:05.285938 1182747 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:52:05.285950 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:52:05.285979 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:52:05.286015 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:52:05.286077 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:52:05.286129 1182747 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:05.286800 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:52:05.308492 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:52:05.326935 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:52:05.345250 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:52:05.363787 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 21:52:05.392773 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:52:05.417673 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:52:05.445533 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:52:05.464549 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:52:05.488914 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:52:05.510479 1182747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:52:05.529235 1182747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:52:05.542917 1182747 ssh_runner.go:195] Run: openssl version
	I1002 21:52:05.549087 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:52:05.557773 1182747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:52:05.561509 1182747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:52:05.561572 1182747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:52:05.603241 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:52:05.611484 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:52:05.619995 1182747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:05.624535 1182747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:05.624603 1182747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:05.665555 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:52:05.673492 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:52:05.681686 1182747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:52:05.685348 1182747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:52:05.685413 1182747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:52:05.726677 1182747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
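The /etc/ssl/certs/<hash>.0 links created above are OpenSSL subject-hash lookups: TLS tooling resolves a CA by hashing its subject name and opening <hash>.0, which is why minikubeCA.pem is linked as b5213941.0. The hash is reproducible by hand:

    # compute the subject hash and create the trust-store link, as the log does
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"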
	I1002 21:52:05.734486 1182747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:52:05.738318 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:52:05.779220 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:52:05.820214 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:52:05.860965 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:52:05.903878 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:52:05.949837 1182747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
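Each of the `-checkend 86400` runs above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit is what would trigger certificate regeneration. The same check, sketched with an explicit message (path taken from the log):

    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$CERT" -checkend 86400; then
      echo "certificate valid for at least 24h"
    else
      echo "certificate expires within 24h"   # regeneration would happen here
    fi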
	I1002 21:52:06.006706 1182747 kubeadm.go:400] StartCluster: {Name:old-k8s-version-714101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-714101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:52:06.006806 1182747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:52:06.006897 1182747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:52:06.132960 1182747 cri.go:89] found id: "5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe"
	I1002 21:52:06.132987 1182747 cri.go:89] found id: "c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473"
	I1002 21:52:06.132992 1182747 cri.go:89] found id: "3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb"
	I1002 21:52:06.133004 1182747 cri.go:89] found id: "d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81"
	I1002 21:52:06.133007 1182747 cri.go:89] found id: ""
	I1002 21:52:06.133055 1182747 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:52:06.157820 1182747 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:52:06Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:52:06.157914 1182747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:52:06.167967 1182747 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:52:06.167990 1182747 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:52:06.168051 1182747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:52:06.178260 1182747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:52:06.178722 1182747 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-714101" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:52:06.178844 1182747 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-714101" cluster setting kubeconfig missing "old-k8s-version-714101" context setting]
	I1002 21:52:06.179174 1182747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:06.180783 1182747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:52:06.197499 1182747 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:52:06.197544 1182747 kubeadm.go:601] duration metric: took 29.547421ms to restartPrimaryControlPlane
	I1002 21:52:06.197555 1182747 kubeadm.go:402] duration metric: took 190.86711ms to StartCluster
	I1002 21:52:06.197570 1182747 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:06.197655 1182747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:52:06.198353 1182747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
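The repair above writes the missing cluster and context entries back into the kubeconfig before retrying. Roughly the same repair done by hand with kubectl (the CA path is taken from earlier in the log; --embed-certs is an assumption about how minikube stores it):

    KCFG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
    kubectl --kubeconfig "$KCFG" config set-cluster old-k8s-version-714101 \
      --server=https://192.168.76.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt \
      --embed-certs=true
    kubectl --kubeconfig "$KCFG" config set-context old-k8s-version-714101 \
      --cluster=old-k8s-version-714101 --user=old-k8s-version-714101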
	I1002 21:52:06.198595 1182747 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:52:06.199024 1182747 config.go:182] Loaded profile config "old-k8s-version-714101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:52:06.199018 1182747 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:52:06.199138 1182747 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-714101"
	I1002 21:52:06.199152 1182747 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-714101"
	W1002 21:52:06.199169 1182747 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:52:06.199193 1182747 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:52:06.199634 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.199810 1182747 addons.go:69] Setting dashboard=true in profile "old-k8s-version-714101"
	I1002 21:52:06.199850 1182747 addons.go:238] Setting addon dashboard=true in "old-k8s-version-714101"
	W1002 21:52:06.199881 1182747 addons.go:247] addon dashboard should already be in state true
	I1002 21:52:06.199920 1182747 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:52:06.200130 1182747 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-714101"
	I1002 21:52:06.200155 1182747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-714101"
	I1002 21:52:06.200410 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.200465 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.203676 1182747 out.go:179] * Verifying Kubernetes components...
	I1002 21:52:06.209938 1182747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:06.290867 1182747 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:52:06.290927 1182747 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:06.294211 1182747 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:06.294226 1182747 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-714101"
	W1002 21:52:06.294243 1182747 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:52:06.294268 1182747 host.go:66] Checking if "old-k8s-version-714101" exists ...
	I1002 21:52:06.295812 1182747 cli_runner.go:164] Run: docker container inspect old-k8s-version-714101 --format={{.State.Status}}
	I1002 21:52:06.294231 1182747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:52:06.296065 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:06.304292 1182747 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:52:06.310910 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:52:06.310937 1182747 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:52:06.311020 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:06.372421 1182747 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:06.372441 1182747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:52:06.372508 1182747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-714101
	I1002 21:52:06.373915 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:06.374429 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:06.411460 1182747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34186 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/old-k8s-version-714101/id_rsa Username:docker}
	I1002 21:52:06.632669 1182747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:06.690703 1182747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:06.699296 1182747 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-714101" to be "Ready" ...
	I1002 21:52:06.726586 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:52:06.726611 1182747 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:52:06.766697 1182747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:06.782303 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:52:06.782324 1182747 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:52:06.910398 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:52:06.910422 1182747 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:52:06.282412 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:52:06.282436 1183602 ubuntu.go:182] provisioning hostname "no-preload-661954"
	I1002 21:52:06.282502 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:06.335705 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:06.336022 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:06.336034 1183602 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-661954 && echo "no-preload-661954" | sudo tee /etc/hostname
	I1002 21:52:06.559460 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:52:06.559533 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:06.580631 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:06.580940 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:06.580957 1183602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-661954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-661954/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-661954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:52:06.750628 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:52:06.750708 1183602 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:52:06.750746 1183602 ubuntu.go:190] setting up certificates
	I1002 21:52:06.750786 1183602 provision.go:84] configureAuth start
	I1002 21:52:06.750890 1183602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:52:06.782565 1183602 provision.go:143] copyHostCerts
	I1002 21:52:06.782627 1183602 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:52:06.782637 1183602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:52:06.782720 1183602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:52:06.782811 1183602 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:52:06.782817 1183602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:52:06.782842 1183602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:52:06.782892 1183602 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:52:06.782897 1183602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:52:06.782919 1183602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:52:06.782973 1183602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.no-preload-661954 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-661954]
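The server certificate above is signed by minikube's CA and carries the SAN list printed in the log. As a hedged stand-in, a self-signed certificate with the same SANs can be produced like this (minikube really signs against ca.pem/ca-key.pem; `-addext` requires OpenSSL 1.1.1+):

    # Self-signed sketch only; the real flow signs against the minikube CA.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.no-preload-661954" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-661954"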
	I1002 21:52:07.005870 1183602 provision.go:177] copyRemoteCerts
	I1002 21:52:07.005996 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:52:07.006091 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.030249 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:07.136071 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:52:07.171937 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:52:07.200316 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:52:07.236860 1183602 provision.go:87] duration metric: took 486.034622ms to configureAuth
	I1002 21:52:07.236891 1183602 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:52:07.237070 1183602 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:52:07.237174 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.259036 1183602 main.go:141] libmachine: Using SSH client type: native
	I1002 21:52:07.259426 1183602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34191 <nil> <nil>}
	I1002 21:52:07.259474 1183602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:52:07.627312 1183602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:52:07.627386 1183602 machine.go:96] duration metric: took 4.601588502s to provisionDockerMachine
	I1002 21:52:07.627411 1183602 client.go:171] duration metric: took 7.15554567s to LocalClient.Create
	I1002 21:52:07.627437 1183602 start.go:168] duration metric: took 7.155661236s to libmachine.API.Create "no-preload-661954"
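The `systemctl restart crio` at the end of the sysconfig command above is what makes the insecure-registry flag take effect. Whether the crio unit actually sources /etc/sysconfig/crio.minikube depends on an EnvironmentFile drop-in the log does not show; a quick way to verify both halves on the node:

    cat /etc/sysconfig/crio.minikube           # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environment   # look for an EnvironmentFile= line referencing it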
	I1002 21:52:07.627480 1183602 start.go:294] postStartSetup for "no-preload-661954" (driver="docker")
	I1002 21:52:07.627503 1183602 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:52:07.627599 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:52:07.627671 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.658223 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:07.779630 1183602 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:52:07.783328 1183602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:52:07.783359 1183602 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:52:07.783371 1183602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:52:07.783438 1183602 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:52:07.783522 1183602 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:52:07.783630 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:52:07.795930 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:07.823880 1183602 start.go:297] duration metric: took 196.372991ms for postStartSetup
	I1002 21:52:07.824240 1183602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:52:07.851385 1183602 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:52:07.851661 1183602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:52:07.851700 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:07.890144 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:07.998431 1183602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:52:08.010193 1183602 start.go:129] duration metric: took 7.551624447s to createHost
	I1002 21:52:08.010234 1183602 start.go:84] releasing machines lock for "no-preload-661954", held for 7.551758983s
	I1002 21:52:08.010307 1183602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:52:08.031230 1183602 ssh_runner.go:195] Run: cat /version.json
	I1002 21:52:08.031285 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:08.031354 1183602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:52:08.031428 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:08.052848 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:08.075531 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:08.178315 1183602 ssh_runner.go:195] Run: systemctl --version
	I1002 21:52:08.291334 1183602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:52:08.337419 1183602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:52:08.342412 1183602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:52:08.342482 1183602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:52:08.393326 1183602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
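The `find ... -exec mv` above side-lines competing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix so they cannot shadow the cluster's own CNI. Undoing it is the mirror image (sketch; the guard skips the unexpanded glob when no such files exist):

    # Re-enable CNI configs that were renamed out of the way.
    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
    done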
	I1002 21:52:08.393349 1183602 start.go:496] detecting cgroup driver to use...
	I1002 21:52:08.393380 1183602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:52:08.393436 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:52:08.412206 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:52:08.425781 1183602 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:52:08.425842 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:52:08.444808 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:52:08.463810 1183602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:52:08.679064 1183602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:52:08.898670 1183602 docker.go:234] disabling docker service ...
	I1002 21:52:08.898742 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:52:08.942406 1183602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:52:08.960859 1183602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:52:09.159789 1183602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:52:09.371617 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:52:09.392037 1183602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:52:09.421836 1183602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:52:09.421959 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.448449 1183602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:52:09.448577 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.466577 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.484718 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.496716 1183602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:52:09.510273 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.524773 1183602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.547373 1183602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:52:09.557919 1183602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:52:09.567316 1183602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:52:09.579961 1183602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:09.785054 1183602 ssh_runner.go:195] Run: sudo systemctl restart crio
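After the sed edits above, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should carry the four settings the log names: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A verification sketch (expected values taken from the log; exact file layout may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",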
	I1002 21:52:09.964271 1183602 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:52:09.964413 1183602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:52:09.970450 1183602 start.go:564] Will wait 60s for crictl version
	I1002 21:52:09.970566 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:09.976759 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:52:10.031709 1183602 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:52:10.031817 1183602 ssh_runner.go:195] Run: crio --version
	I1002 21:52:10.089399 1183602 ssh_runner.go:195] Run: crio --version
	I1002 21:52:10.147130 1183602 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:52:07.047321 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:52:07.047346 1182747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:52:07.115188 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:52:07.115213 1182747 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:52:07.206364 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:52:07.206391 1182747 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:52:07.275539 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:52:07.275565 1182747 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:52:07.303139 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:52:07.303167 1182747 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:52:07.351056 1182747 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:52:07.351082 1182747 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:52:07.389460 1182747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:52:10.150001 1183602 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:52:10.183806 1183602 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:52:10.188228 1183602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:52:10.204098 1183602 kubeadm.go:883] updating cluster {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:52:10.204206 1183602 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:52:10.204248 1183602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:52:10.242487 1183602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 21:52:10.242509 1183602 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 21:52:10.242544 1183602 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:10.242733 1183602 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.242821 1183602 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.242900 1183602 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.242971 1183602 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.243040 1183602 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1002 21:52:10.243116 1183602 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.243196 1183602 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.246055 1183602 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.246307 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.246440 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.246559 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.246673 1183602 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:10.246965 1183602 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1002 21:52:10.247108 1183602 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.247290 1183602 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.463117 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.492446 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.519939 1183602 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1002 21:52:10.519984 1183602 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.520040 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.520583 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.540281 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.541174 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1002 21:52:10.557997 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.561529 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.632637 1183602 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1002 21:52:10.632699 1183602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.632761 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.660771 1183602 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1002 21:52:10.660898 1183602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.660855 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.660981 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.797576 1183602 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1002 21:52:10.797657 1183602 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.797736 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.797843 1183602 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1002 21:52:10.797879 1183602 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1002 21:52:10.797921 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.798020 1183602 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1002 21:52:10.798087 1183602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.798131 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.831655 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.831734 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.831800 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.831898 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.831946 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.831972 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:52:10.832182 1183602 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1002 21:52:10.832227 1183602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.832264 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:10.991325 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:10.991412 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:10.991471 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:10.991526 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:52:10.991576 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:10.991634 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:10.991682 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:52:11.204821 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:52:11.204967 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:11.205049 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:52:11.205134 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:52:11.205209 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1002 21:52:11.205303 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:52:11.205391 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:52:11.205485 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:52:11.409976 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1002 21:52:11.410105 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:52:11.410164 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:52:11.410206 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1002 21:52:11.410005 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1002 21:52:11.410278 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:52:11.410332 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1002 21:52:11.410353 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1002 21:52:11.410413 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:52:11.410446 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1002 21:52:11.410481 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1002 21:52:11.410494 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1002 21:52:11.410602 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:52:11.538914 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1002 21:52:11.538964 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1002 21:52:11.539045 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1002 21:52:11.539137 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:52:11.539197 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1002 21:52:11.539215 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1002 21:52:11.539265 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1002 21:52:11.539289 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1002 21:52:11.539357 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1002 21:52:11.539374 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1002 21:52:11.539435 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1002 21:52:11.539454 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	W1002 21:52:11.576414 1183602 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 21:52:11.576622 1183602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:11.635994 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1002 21:52:11.636038 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1002 21:52:11.711734 1183602 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1002 21:52:11.711838 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1002 21:52:11.840730 1183602 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 21:52:11.840778 1183602 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:11.840828 1183602 ssh_runner.go:195] Run: which crictl
	I1002 21:52:12.171791 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:12.171867 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1002 21:52:12.321584 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:12.368610 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:52:12.368694 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:52:12.562198 1183602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
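Each image in the cache-load loop above goes through the same three steps: `stat` the tarball on the node, copy it over from the host cache when the stat fails, then `podman load` it into the shared store where CRI-O can see it. Condensed for a single image (paths from the log; the host-to-node copy is shown as plain scp with a placeholder "node:" remote, whereas minikube tunnels it over its own SSH session):

    IMG=pause_3.10.1
    SRC=$HOME/.minikube/cache/images/arm64/registry.k8s.io/$IMG   # host-side cache
    DST=/var/lib/minikube/images/$IMG                             # node-side staging dir
    stat -c "%s %y" "$DST" 2>/dev/null || scp "$SRC" "node:$DST"  # copy only when absent
    sudo podman load -i "$DST"                                    # import into the image store
    sudo crictl images | grep pause                               # confirm CRI-O sees it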
	I1002 21:52:12.606117 1182747 node_ready.go:49] node "old-k8s-version-714101" is "Ready"
	I1002 21:52:12.606146 1182747 node_ready.go:38] duration metric: took 5.906817679s for node "old-k8s-version-714101" to be "Ready" ...
	I1002 21:52:12.606159 1182747 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:52:12.606219 1182747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:52:15.820626 1182747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.129890195s)
	I1002 21:52:15.820697 1182747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.053981117s)
	I1002 21:52:16.516859 1182747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.127353559s)
	I1002 21:52:16.517072 1182747 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.910836562s)
	I1002 21:52:16.517100 1182747 api_server.go:72] duration metric: took 10.318460748s to wait for apiserver process to appear ...
	I1002 21:52:16.517106 1182747 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:52:16.517122 1182747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:52:16.520395 1182747 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-714101 addons enable metrics-server
	
	I1002 21:52:16.523493 1182747 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 21:52:16.526519 1182747 addons.go:514] duration metric: took 10.3274936s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 21:52:16.528431 1182747 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:52:16.529944 1182747 api_server.go:141] control plane version: v1.28.0
	I1002 21:52:16.529962 1182747 api_server.go:131] duration metric: took 12.849901ms to wait for apiserver health ...
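The healthz probe above is an authenticated GET against the apiserver that returns the literal string "ok". The same check from a shell, riding on the kubeconfig's credentials:

    kubectl --context old-k8s-version-714101 get --raw /healthz   # prints "ok" when healthy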
	I1002 21:52:16.529971 1182747 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:52:16.543566 1182747 system_pods.go:59] 8 kube-system pods found
	I1002 21:52:16.543654 1182747 system_pods.go:61] "coredns-5dd5756b68-f7qdk" [848cb78b-98da-49f0-ab85-a772e528b803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:52:16.543682 1182747 system_pods.go:61] "etcd-old-k8s-version-714101" [0966d28a-21e6-417e-8aed-41590aa75beb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:52:16.543723 1182747 system_pods.go:61] "kindnet-qgs2b" [4f2179e4-429f-4a72-886a-c6a3e321a396] Running
	I1002 21:52:16.543752 1182747 system_pods.go:61] "kube-apiserver-old-k8s-version-714101" [b247aaad-25ed-457f-ad85-afbaccf7bc72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:52:16.543776 1182747 system_pods.go:61] "kube-controller-manager-old-k8s-version-714101" [7ea997bf-4afe-409e-bcb9-ea894e8f83e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:52:16.543812 1182747 system_pods.go:61] "kube-proxy-9ktm4" [902dc118-e33e-4d60-8711-8394ffefed71] Running
	I1002 21:52:16.543848 1182747 system_pods.go:61] "kube-scheduler-old-k8s-version-714101" [3d6d0916-6ad4-4a46-ba43-bb0812e6ccd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:52:16.543868 1182747 system_pods.go:61] "storage-provisioner" [84b9ee34-40ec-4d3f-9171-c7a8578abb2b] Running
	I1002 21:52:16.543900 1182747 system_pods.go:74] duration metric: took 13.922784ms to wait for pod list to return data ...
	I1002 21:52:16.543926 1182747 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:52:16.552592 1182747 default_sa.go:45] found service account: "default"
	I1002 21:52:16.552612 1182747 default_sa.go:55] duration metric: took 8.668112ms for default service account to be created ...
	I1002 21:52:16.552621 1182747 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:52:16.558730 1182747 system_pods.go:86] 8 kube-system pods found
	I1002 21:52:16.558758 1182747 system_pods.go:89] "coredns-5dd5756b68-f7qdk" [848cb78b-98da-49f0-ab85-a772e528b803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:52:16.558767 1182747 system_pods.go:89] "etcd-old-k8s-version-714101" [0966d28a-21e6-417e-8aed-41590aa75beb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:52:16.558773 1182747 system_pods.go:89] "kindnet-qgs2b" [4f2179e4-429f-4a72-886a-c6a3e321a396] Running
	I1002 21:52:16.558780 1182747 system_pods.go:89] "kube-apiserver-old-k8s-version-714101" [b247aaad-25ed-457f-ad85-afbaccf7bc72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:52:16.558786 1182747 system_pods.go:89] "kube-controller-manager-old-k8s-version-714101" [7ea997bf-4afe-409e-bcb9-ea894e8f83e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:52:16.558791 1182747 system_pods.go:89] "kube-proxy-9ktm4" [902dc118-e33e-4d60-8711-8394ffefed71] Running
	I1002 21:52:16.558797 1182747 system_pods.go:89] "kube-scheduler-old-k8s-version-714101" [3d6d0916-6ad4-4a46-ba43-bb0812e6ccd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:52:16.558800 1182747 system_pods.go:89] "storage-provisioner" [84b9ee34-40ec-4d3f-9171-c7a8578abb2b] Running
	I1002 21:52:16.558808 1182747 system_pods.go:126] duration metric: took 6.180555ms to wait for k8s-apps to be running ...
	I1002 21:52:16.558815 1182747 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:52:16.558870 1182747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:52:16.583260 1182747 system_svc.go:56] duration metric: took 24.433858ms WaitForService to wait for kubelet
	I1002 21:52:16.583289 1182747 kubeadm.go:586] duration metric: took 10.384658164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:52:16.583311 1182747 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:52:16.588814 1182747 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:52:16.588852 1182747 node_conditions.go:123] node cpu capacity is 2
	I1002 21:52:16.588865 1182747 node_conditions.go:105] duration metric: took 5.548711ms to run NodePressure ...
	I1002 21:52:16.588878 1182747 start.go:242] waiting for startup goroutines ...
	I1002 21:52:16.588886 1182747 start.go:247] waiting for cluster config update ...
	I1002 21:52:16.588897 1182747 start.go:256] writing updated cluster config ...
	I1002 21:52:16.589162 1182747 ssh_runner.go:195] Run: rm -f paused
	I1002 21:52:16.597225 1182747 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:52:16.607644 1182747 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-f7qdk" in "kube-system" namespace to be "Ready" or be gone ...
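
The pod_ready.go lines above (and the W... warnings that follow) poll each kube-system pod until its Ready condition turns true or the pod disappears. Below is a minimal client-go sketch of that wait pattern, assuming a standard ~/.kube/config; the label selector and 2-second interval are illustrative, not minikube's actual pod_ready.go.

// Wait for a labelled kube-system pod to report Ready, or vanish,
// mirroring the "waiting for pod ... to be 'Ready' or be gone" loop above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil {
			if len(pods.Items) == 0 {
				fmt.Println("pod is gone") // "or be gone" also ends the wait
				return
			}
			if isReady(&pods.Items[0]) {
				fmt.Println("pod is Ready")
				return
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod")
		case <-time.After(2 * time.Second):
		}
	}
}
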
	I1002 21:52:15.599272 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (3.230549708s)
	I1002 21:52:15.599305 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1002 21:52:15.599322 1183602 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:52:15.599378 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:52:15.599438 1183602 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.037215874s)
	I1002 21:52:15.599486 1183602 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 21:52:15.599566 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:52:17.568676 1183602 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.969082899s)
	I1002 21:52:17.568711 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 21:52:17.568735 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1002 21:52:17.568870 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.969478335s)
	I1002 21:52:17.568887 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1002 21:52:17.568903 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:52:17.568959 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:52:19.014580 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.445601107s)
	I1002 21:52:19.014609 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1002 21:52:19.014629 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:52:19.014677 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1002 21:52:18.614599 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:20.618578 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:20.188891 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.174185049s)
	I1002 21:52:20.188919 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1002 21:52:20.188938 1183602 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:52:20.188984 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:52:21.615829 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.426811443s)
	I1002 21:52:21.615858 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1002 21:52:21.615883 1183602 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:52:21.615929 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1002 21:52:23.114918 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:25.115584 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:25.272273 1183602 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.656315467s)
	I1002 21:52:25.272299 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1002 21:52:25.272317 1183602 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:52:25.272369 1183602 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:52:25.902517 1183602 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 21:52:25.902550 1183602 cache_images.go:124] Successfully loaded all cached images
	I1002 21:52:25.902556 1183602 cache_images.go:93] duration metric: took 15.660034703s to LoadCachedImages
	I1002 21:52:25.902566 1183602 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:52:25.902654 1183602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-661954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:52:25.902734 1183602 ssh_runner.go:195] Run: crio config
	I1002 21:52:25.973838 1183602 cni.go:84] Creating CNI manager for ""
	I1002 21:52:25.973869 1183602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:25.973888 1183602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:52:25.973912 1183602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-661954 NodeName:no-preload-661954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:52:25.974063 1183602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-661954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
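
The kubeadm.yaml above is generated rather than hand-written: minikube renders it from Go text templates (under pkg/minikube/bootstrapper/bsutil in the source tree), substituting the node name, IP, and CRI socket. A stripped-down sketch of that rendering step follows; the template fragment and struct here are illustrative, not minikube's real template.

// Render a kubeadm config fragment from a Go text template with the
// values seen in this run. Illustrative only; the real templates carry
// many more fields.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// Values taken from the run above.
	err := t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.85.2", 8443, "no-preload-661954"})
	if err != nil {
		panic(err)
	}
}
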
	
	I1002 21:52:25.974145 1183602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:52:25.982616 1183602 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1002 21:52:25.982679 1183602 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1002 21:52:25.990991 1183602 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1002 21:52:25.991116 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1002 21:52:25.991940 1183602 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1002 21:52:25.991939 1183602 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1002 21:52:25.996153 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1002 21:52:25.996207 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1002 21:52:27.259110 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1002 21:52:27.263321 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1002 21:52:27.263378 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1002 21:52:27.267502 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:52:27.303800 1183602 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1002 21:52:27.331735 1183602 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1002 21:52:27.331830 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
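
Each binary URL above carries a checksum=file:...sha256 hint, meaning the download is verified against the published .sha256 file before being cached and scp'd onto the node. Here is a sketch of that verify-then-install step in plain Go; minikube itself routes downloads through a helper with caching and retries, so this shows the idea, not the implementation.

// Download a release binary and verify it against its published
// .sha256 digest before installing, as the checksum=file:... URLs imply.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return body
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
	bin := fetch(base)
	// The .sha256 file holds the hex digest (possibly followed by a filename).
	want := strings.Fields(string(fetch(base + ".sha256")))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified against", want)
}
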
	I1002 21:52:28.076956 1183602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:52:28.104105 1183602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:52:28.134631 1183602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:52:28.162166 1183602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 21:52:28.193332 1183602 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:52:28.203263 1183602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:52:28.214521 1183602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:28.384599 1183602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:28.408688 1183602 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954 for IP: 192.168.85.2
	I1002 21:52:28.408761 1183602 certs.go:195] generating shared ca certs ...
	I1002 21:52:28.408792 1183602 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:28.408984 1183602 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:52:28.409066 1183602 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:52:28.409094 1183602 certs.go:257] generating profile certs ...
	I1002 21:52:28.409177 1183602 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key
	I1002 21:52:28.409219 1183602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt with IP's: []
	I1002 21:52:29.200941 1183602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt ...
	I1002 21:52:29.200970 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: {Name:mk3043b5efd47e137543aa61b0e942b7285caeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:29.201147 1183602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key ...
	I1002 21:52:29.201163 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key: {Name:mk0fc54489a5bd53f8de9284e56b6b1960465035 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:29.201244 1183602 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4
	I1002 21:52:29.201264 1183602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 21:52:30.269819 1183602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4 ...
	I1002 21:52:30.269853 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4: {Name:mke6b2152d98df0dbd9b59ac789c14469c552e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:30.270076 1183602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4 ...
	I1002 21:52:30.270093 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4: {Name:mk566bf146cd3961f59f68f90b034cfea289f2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:30.270196 1183602 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt.ffe6e5b4 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt
	I1002 21:52:30.270279 1183602 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key
	I1002 21:52:30.270344 1183602 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key
	I1002 21:52:30.270363 1183602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt with IP's: []
	I1002 21:52:31.099977 1183602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt ...
	I1002 21:52:31.100051 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt: {Name:mkd6fffa7f1694d88a53de97fbabde36d0fd81bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:31.100280 1183602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key ...
	I1002 21:52:31.100316 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key: {Name:mk3ce58ca0666401bdf7ac0d09ce5e4906dda4e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:31.100585 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:52:31.100654 1183602 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:52:31.100696 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:52:31.100750 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:52:31.100809 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:52:31.100855 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:52:31.100941 1183602 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:52:31.101581 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:52:31.136493 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:52:31.157145 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:52:31.177036 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:52:31.195051 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:52:31.212376 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:52:31.229859 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:52:31.248452 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:52:31.265622 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:52:31.283263 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:52:31.300415 1183602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:52:31.320782 1183602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:52:31.333539 1183602 ssh_runner.go:195] Run: openssl version
	I1002 21:52:31.346531 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:52:31.355545 1183602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:31.360535 1183602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:31.360647 1183602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:52:31.403791 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:52:31.411967 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:52:31.423398 1183602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:52:31.427507 1183602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:52:31.427621 1183602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:52:31.486535 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:52:31.494999 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:52:31.503346 1183602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:52:31.516373 1183602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:52:31.516501 1183602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:52:31.559691 1183602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
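
The openssl x509 -hash -noout / ln -fs pairs above exist because OpenSSL looks up CA certificates in /etc/ssl/certs by subject-hash filenames such as b5213941.0. A sketch of the same step, assuming openssl is on PATH and the link directory is writable via sudo:

// Create the <subject-hash>.0 symlink OpenSSL uses to resolve a CA cert,
// matching the openssl/ln pairs in the log above.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching b5213941.0 above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: sudo ln -fs <cert> <link> (root owns /etc/ssl/certs).
	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
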
	I1002 21:52:31.576553 1183602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:52:31.588264 1183602 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:52:31.588385 1183602 kubeadm.go:400] StartCluster: {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:52:31.588485 1183602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:52:31.588579 1183602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:52:31.660615 1183602 cri.go:89] found id: ""
	I1002 21:52:31.660763 1183602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:52:31.673220 1183602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:52:31.687368 1183602 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:52:31.687479 1183602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:52:31.700264 1183602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:52:31.700322 1183602 kubeadm.go:157] found existing configuration files:
	
	I1002 21:52:31.700415 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:52:31.709633 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:52:31.709743 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:52:31.721421 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:52:31.735792 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:52:31.735924 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:52:31.743148 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:52:31.752173 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:52:31.752241 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:52:31.763997 1183602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:52:31.772498 1183602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:52:31.772629 1183602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:52:31.782422 1183602 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:52:31.832657 1183602 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:52:31.833085 1183602 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:52:31.892150 1183602 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:52:31.892321 1183602 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:52:31.892393 1183602 kubeadm.go:318] OS: Linux
	I1002 21:52:31.892478 1183602 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:52:31.892567 1183602 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:52:31.892675 1183602 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:52:31.892772 1183602 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:52:31.892874 1183602 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:52:31.892971 1183602 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:52:31.893059 1183602 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:52:31.893154 1183602 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:52:31.893243 1183602 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:52:32.005768 1183602 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:52:32.005885 1183602 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:52:32.005981 1183602 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:52:32.029562 1183602 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1002 21:52:27.623145 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:30.120811 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:32.036102 1183602 out.go:252]   - Generating certificates and keys ...
	I1002 21:52:32.036214 1183602 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:52:32.036287 1183602 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:52:32.362394 1183602 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:52:32.766331 1183602 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:52:33.485776 1183602 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:52:34.042469 1183602 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:52:34.633016 1183602 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:52:34.634435 1183602 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-661954] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:52:34.748402 1183602 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:52:34.750393 1183602 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-661954] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:52:34.843060 1183602 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	W1002 21:52:32.616702 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:34.631227 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:35.398425 1183602 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:52:35.711364 1183602 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:52:35.711748 1183602 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:52:36.019472 1183602 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:52:36.248579 1183602 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:52:36.522602 1183602 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:52:37.203209 1183602 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:52:37.999514 1183602 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:52:38.000356 1183602 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:52:38.003130 1183602 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:52:38.007414 1183602 out.go:252]   - Booting up control plane ...
	I1002 21:52:38.007542 1183602 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:52:38.007630 1183602 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:52:38.007705 1183602 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:52:38.025758 1183602 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:52:38.025872 1183602 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:52:38.034878 1183602 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:52:38.035560 1183602 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:52:38.035847 1183602 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:52:38.213509 1183602 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:52:38.213642 1183602 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:52:39.215095 1183602 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001654449s
	I1002 21:52:39.218499 1183602 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:52:39.218599 1183602 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 21:52:39.218909 1183602 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:52:39.218999 1183602 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 21:52:37.119201 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:39.614815 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:41.625232 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:43.389464 1183602 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.170574362s
	I1002 21:52:44.643428 1183602 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.424872283s
	I1002 21:52:46.725656 1183602 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.505046261s
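
api_server.go's healthz wait and kubeadm's control-plane-check above both reduce to the same loop: poll an HTTPS health endpoint until it answers 200 or a deadline passes. A sketch of that loop follows; InsecureSkipVerify is a shortcut for the sketch only, a real client should trust the cluster CA.

// Poll a control-plane health endpoint (livez/healthz) until it
// returns HTTP 200 or the 4m0s budget from the log expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch shortcut: skip TLS verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8443/livez"
	deadline := time.Now().Add(4 * time.Minute) // kubeadm's "up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println(url, "returned 200: ok")
				return
			}
		}
		time.Sleep(time.Second)
	}
	panic("control plane never became healthy")
}
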
	I1002 21:52:46.744381 1183602 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:52:46.761249 1183602 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:52:46.775924 1183602 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:52:46.776156 1183602 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-661954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:52:46.788301 1183602 kubeadm.go:318] [bootstrap-token] Using token: di0bi5.u1ybuxaty6dqdvqe
	W1002 21:52:44.114536 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	W1002 21:52:46.612987 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:46.791209 1183602 out.go:252]   - Configuring RBAC rules ...
	I1002 21:52:46.791347 1183602 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:52:46.795767 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:52:46.804818 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:52:46.811490 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:52:46.815607 1183602 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:52:46.820259 1183602 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:52:47.131604 1183602 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:52:47.590451 1183602 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:52:48.130678 1183602 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:52:48.133160 1183602 kubeadm.go:318] 
	I1002 21:52:48.133246 1183602 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:52:48.133261 1183602 kubeadm.go:318] 
	I1002 21:52:48.133343 1183602 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:52:48.133355 1183602 kubeadm.go:318] 
	I1002 21:52:48.133382 1183602 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:52:48.133471 1183602 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:52:48.133571 1183602 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:52:48.133585 1183602 kubeadm.go:318] 
	I1002 21:52:48.133649 1183602 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:52:48.133655 1183602 kubeadm.go:318] 
	I1002 21:52:48.133729 1183602 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:52:48.133735 1183602 kubeadm.go:318] 
	I1002 21:52:48.133800 1183602 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:52:48.133914 1183602 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:52:48.134003 1183602 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:52:48.134013 1183602 kubeadm.go:318] 
	I1002 21:52:48.134132 1183602 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:52:48.134255 1183602 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:52:48.134267 1183602 kubeadm.go:318] 
	I1002 21:52:48.134377 1183602 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token di0bi5.u1ybuxaty6dqdvqe \
	I1002 21:52:48.134517 1183602 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:52:48.134551 1183602 kubeadm.go:318] 	--control-plane 
	I1002 21:52:48.134566 1183602 kubeadm.go:318] 
	I1002 21:52:48.134672 1183602 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:52:48.134683 1183602 kubeadm.go:318] 
	I1002 21:52:48.134794 1183602 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token di0bi5.u1ybuxaty6dqdvqe \
	I1002 21:52:48.134923 1183602 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:52:48.138480 1183602 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:52:48.138728 1183602 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:52:48.138848 1183602 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:52:48.138870 1183602 cni.go:84] Creating CNI manager for ""
	I1002 21:52:48.138878 1183602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:52:48.142100 1183602 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:52:48.145059 1183602 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:52:48.151414 1183602 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:52:48.151439 1183602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:52:48.166991 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:52:48.474839 1183602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:52:48.475012 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:48.475111 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-661954 minikube.k8s.io/updated_at=2025_10_02T21_52_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=no-preload-661954 minikube.k8s.io/primary=true
	I1002 21:52:48.708022 1183602 ops.go:34] apiserver oom_adj: -16
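
The oom_adj probe above reads /proc/$(pgrep kube-apiserver)/oom_adj; the reported -16 sits just short of the fully exempt -17 on the legacy oom_adj scale, so the kernel's OOM killer will pick almost any other process first. A sketch of the same probe, assuming pgrep is available and a single apiserver process:

// Read the apiserver's OOM adjustment, like the
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` line above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -o picks the oldest match so we get exactly one PID.
	out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
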
	I1002 21:52:48.708127 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:49.209066 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:49.708269 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1002 21:52:48.613956 1182747 pod_ready.go:104] pod "coredns-5dd5756b68-f7qdk" is not "Ready", error: <nil>
	I1002 21:52:51.114910 1182747 pod_ready.go:94] pod "coredns-5dd5756b68-f7qdk" is "Ready"
	I1002 21:52:51.114943 1182747 pod_ready.go:86] duration metric: took 34.507271712s for pod "coredns-5dd5756b68-f7qdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.118109 1182747 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.123967 1182747 pod_ready.go:94] pod "etcd-old-k8s-version-714101" is "Ready"
	I1002 21:52:51.124005 1182747 pod_ready.go:86] duration metric: took 5.868326ms for pod "etcd-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.127901 1182747 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.133651 1182747 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-714101" is "Ready"
	I1002 21:52:51.133687 1182747 pod_ready.go:86] duration metric: took 5.716345ms for pod "kube-apiserver-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.137091 1182747 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.311463 1182747 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-714101" is "Ready"
	I1002 21:52:51.311493 1182747 pod_ready.go:86] duration metric: took 174.3747ms for pod "kube-controller-manager-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.512464 1182747 pod_ready.go:83] waiting for pod "kube-proxy-9ktm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:51.911836 1182747 pod_ready.go:94] pod "kube-proxy-9ktm4" is "Ready"
	I1002 21:52:51.911866 1182747 pod_ready.go:86] duration metric: took 399.373304ms for pod "kube-proxy-9ktm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:50.208309 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:50.708288 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:51.208500 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:51.708453 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:52.208399 1183602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:52:52.305731 1183602 kubeadm.go:1113] duration metric: took 3.830801358s to wait for elevateKubeSystemPrivileges
	I1002 21:52:52.305759 1183602 kubeadm.go:402] duration metric: took 20.717380405s to StartCluster
	I1002 21:52:52.305776 1183602 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:52.305835 1183602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:52:52.306902 1183602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:52:52.307124 1183602 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:52:52.307204 1183602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:52:52.307434 1183602 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:52:52.307460 1183602 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:52:52.307546 1183602 addons.go:69] Setting storage-provisioner=true in profile "no-preload-661954"
	I1002 21:52:52.307553 1183602 addons.go:69] Setting default-storageclass=true in profile "no-preload-661954"
	I1002 21:52:52.307561 1183602 addons.go:238] Setting addon storage-provisioner=true in "no-preload-661954"
	I1002 21:52:52.307568 1183602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-661954"
	I1002 21:52:52.307588 1183602 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:52:52.307920 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:52.308077 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:52.311070 1183602 out.go:179] * Verifying Kubernetes components...
	I1002 21:52:52.314169 1183602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:52:52.364059 1183602 addons.go:238] Setting addon default-storageclass=true in "no-preload-661954"
	I1002 21:52:52.364099 1183602 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:52:52.364514 1183602 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:52:52.371867 1183602 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:52:52.111920 1182747 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:52.511930 1182747 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-714101" is "Ready"
	I1002 21:52:52.511960 1182747 pod_ready.go:86] duration metric: took 400.013911ms for pod "kube-scheduler-old-k8s-version-714101" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:52:52.511973 1182747 pod_ready.go:40] duration metric: took 35.914713644s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:52:52.615971 1182747 start.go:627] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 21:52:52.618896 1182747 out.go:203] 
	W1002 21:52:52.621735 1182747 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 21:52:52.624508 1182747 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 21:52:52.627525 1182747 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-714101" cluster and "default" namespace by default
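
The version-skew warning above deserves a note: upstream kubectl only guarantees compatibility within one minor version of the API server, and this client (1.33.2) is five minors ahead of the 1.28.0 cluster. The log already points at the workaround; a sketch using this run's profile name:

	# use minikube's bundled kubectl, which matches the cluster version
	minikube -p old-k8s-version-714101 kubectl -- get pods -A
	# or inspect the client/server skew directly
	kubectl version --output=yaml
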
	I1002 21:52:52.374931 1183602 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:52.374960 1183602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:52:52.375025 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:52.405200 1183602 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:52.405219 1183602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:52:52.405282 1183602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:52:52.421776 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:52.447876 1183602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34191 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:52:52.748782 1183602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:52:52.748836 1183602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:52:52.863097 1183602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:52:52.991633 1183602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:52:53.731441 1183602 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
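
The one-liner at 21:52:52 rewrites the coredns ConfigMap in place: sed injects a hosts stanza ahead of the forward directive so host.minikube.internal resolves to the host gateway (192.168.85.1), and the edited Corefile is piped back through kubectl replace. A sketch to verify the result (the Corefile data key is assumed from the standard CoreDNS ConfigMap layout):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain something like:
	#     hosts {
	#        192.168.85.1 host.minikube.internal
	#        fallthrough
	#     }
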
	I1002 21:52:53.733422 1183602 node_ready.go:35] waiting up to 6m0s for node "no-preload-661954" to be "Ready" ...
	I1002 21:52:54.045904 1183602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.054235822s)
	I1002 21:52:54.046608 1183602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.183476615s)
	I1002 21:52:54.076724 1183602 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:52:54.079821 1183602 addons.go:514] duration metric: took 1.772351703s for enable addons: enabled=[storage-provisioner default-storageclass]
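
Both addons are enabled by copying static manifests onto the node and applying them with the in-VM kubeconfig, as the two apply commands above show. A quick check that the default StorageClass landed (sketch; "standard" is assumed to be minikube's default class name):

	kubectl get storageclass
	# the default class should carry the annotation
	#   storageclass.kubernetes.io/is-default-class: "true"
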
	I1002 21:52:54.237862 1183602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-661954" context rescaled to 1 replicas
	W1002 21:52:55.737074 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:52:57.738702 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:00.248307 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:02.736781 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:04.736884 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	W1002 21:53:06.740422 1183602 node_ready.go:57] node "no-preload-661954" has "Ready":"False" status (will retry)
	I1002 21:53:07.236497 1183602 node_ready.go:49] node "no-preload-661954" is "Ready"
	I1002 21:53:07.236523 1183602 node_ready.go:38] duration metric: took 13.503070959s for node "no-preload-661954" to be "Ready" ...
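
The Ready polling above (six "will retry" probes over roughly 13.5s) is equivalent to kubectl's built-in condition wait; a sketch with the same 6m budget the run used:

	kubectl wait --for=condition=Ready node/no-preload-661954 --timeout=6m
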
	I1002 21:53:07.236536 1183602 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:53:07.236602 1183602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:53:07.262515 1183602 api_server.go:72] duration metric: took 14.955361917s to wait for apiserver process to appear ...
	I1002 21:53:07.262536 1183602 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:53:07.262554 1183602 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:07.281178 1183602 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 21:53:07.282689 1183602 api_server.go:141] control plane version: v1.34.1
	I1002 21:53:07.282715 1183602 api_server.go:131] duration metric: took 20.17171ms to wait for apiserver health ...
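
The healthz probe can be reproduced by hand: under the default system:public-info-viewer binding, /healthz is readable by unauthenticated clients, so a plain curl against the endpoint from the log suffices (sketch):

	curl -sk https://192.168.85.2:8443/healthz
	# prints "ok"; add ?verbose for the per-check breakdown
	curl -sk 'https://192.168.85.2:8443/healthz?verbose'
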
	I1002 21:53:07.282723 1183602 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:53:07.289157 1183602 system_pods.go:59] 8 kube-system pods found
	I1002 21:53:07.289192 1183602 system_pods.go:61] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:07.289202 1183602 system_pods.go:61] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running
	I1002 21:53:07.289208 1183602 system_pods.go:61] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:07.289212 1183602 system_pods.go:61] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running
	I1002 21:53:07.289217 1183602 system_pods.go:61] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running
	I1002 21:53:07.289221 1183602 system_pods.go:61] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:07.289226 1183602 system_pods.go:61] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running
	I1002 21:53:07.289232 1183602 system_pods.go:61] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:53:07.289237 1183602 system_pods.go:74] duration metric: took 6.509055ms to wait for pod list to return data ...
	I1002 21:53:07.289244 1183602 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:53:07.292258 1183602 default_sa.go:45] found service account: "default"
	I1002 21:53:07.292278 1183602 default_sa.go:55] duration metric: took 3.028605ms for default service account to be created ...
	I1002 21:53:07.292295 1183602 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:53:07.296746 1183602 system_pods.go:86] 8 kube-system pods found
	I1002 21:53:07.296777 1183602 system_pods.go:89] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:07.296784 1183602 system_pods.go:89] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running
	I1002 21:53:07.296790 1183602 system_pods.go:89] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:07.296794 1183602 system_pods.go:89] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running
	I1002 21:53:07.296813 1183602 system_pods.go:89] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running
	I1002 21:53:07.296817 1183602 system_pods.go:89] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:07.296821 1183602 system_pods.go:89] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running
	I1002 21:53:07.296827 1183602 system_pods.go:89] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:53:07.296852 1183602 retry.go:31] will retry after 249.716999ms: missing components: kube-dns
	I1002 21:53:07.568089 1183602 system_pods.go:86] 8 kube-system pods found
	I1002 21:53:07.568123 1183602 system_pods.go:89] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:07.568130 1183602 system_pods.go:89] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running
	I1002 21:53:07.568136 1183602 system_pods.go:89] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:07.568140 1183602 system_pods.go:89] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running
	I1002 21:53:07.568144 1183602 system_pods.go:89] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running
	I1002 21:53:07.568149 1183602 system_pods.go:89] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:07.568152 1183602 system_pods.go:89] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running
	I1002 21:53:07.568158 1183602 system_pods.go:89] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:53:07.568171 1183602 retry.go:31] will retry after 366.003776ms: missing components: kube-dns
	I1002 21:53:07.937886 1183602 system_pods.go:86] 8 kube-system pods found
	I1002 21:53:07.937919 1183602 system_pods.go:89] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:07.937928 1183602 system_pods.go:89] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running
	I1002 21:53:07.937934 1183602 system_pods.go:89] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:07.937943 1183602 system_pods.go:89] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running
	I1002 21:53:07.937948 1183602 system_pods.go:89] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running
	I1002 21:53:07.937951 1183602 system_pods.go:89] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:07.937955 1183602 system_pods.go:89] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running
	I1002 21:53:07.937959 1183602 system_pods.go:89] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:07.937965 1183602 system_pods.go:126] duration metric: took 645.652918ms to wait for k8s-apps to be running ...
	I1002 21:53:07.937972 1183602 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:53:07.938025 1183602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:07.964986 1183602 system_svc.go:56] duration metric: took 27.002809ms WaitForService to wait for kubelet
	I1002 21:53:07.965015 1183602 kubeadm.go:586] duration metric: took 15.657867483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:07.965034 1183602 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:53:07.968681 1183602 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:53:07.968724 1183602 node_conditions.go:123] node cpu capacity is 2
	I1002 21:53:07.968738 1183602 node_conditions.go:105] duration metric: took 3.698208ms to run NodePressure ...
	I1002 21:53:07.968750 1183602 start.go:242] waiting for startup goroutines ...
	I1002 21:53:07.968757 1183602 start.go:247] waiting for cluster config update ...
	I1002 21:53:07.968778 1183602 start.go:256] writing updated cluster config ...
	I1002 21:53:07.969107 1183602 ssh_runner.go:195] Run: rm -f paused
	I1002 21:53:07.974333 1183602 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:53:07.978302 1183602 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:08.984045 1183602 pod_ready.go:94] pod "coredns-66bc5c9577-ddsr2" is "Ready"
	I1002 21:53:08.984090 1183602 pod_ready.go:86] duration metric: took 1.005756369s for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:08.986979 1183602 pod_ready.go:83] waiting for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:08.992308 1183602 pod_ready.go:94] pod "etcd-no-preload-661954" is "Ready"
	I1002 21:53:08.992335 1183602 pod_ready.go:86] duration metric: took 5.331281ms for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:08.995951 1183602 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:09.004262 1183602 pod_ready.go:94] pod "kube-apiserver-no-preload-661954" is "Ready"
	I1002 21:53:09.004288 1183602 pod_ready.go:86] duration metric: took 8.313988ms for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:09.014906 1183602 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:09.183410 1183602 pod_ready.go:94] pod "kube-controller-manager-no-preload-661954" is "Ready"
	I1002 21:53:09.183433 1183602 pod_ready.go:86] duration metric: took 168.451592ms for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:09.382361 1183602 pod_ready.go:83] waiting for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:53:09.782150 1183602 pod_ready.go:94] pod "kube-proxy-5jstv" is "Ready"
	I1002 21:53:09.782180 1183602 pod_ready.go:86] duration metric: took 399.787721ms for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
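
The per-pod waits above iterate over one label selector per control-plane component; the same check collapses into kubectl wait calls, e.g. for the two label styles in the list (sketch):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=4m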
	
	
	==> CRI-O <==
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.977955847Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.981516492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.981669276Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.981743095Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.985720295Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.985873366Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.985947194Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.991887845Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.99195556Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.991980093Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.998312324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:52:53 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:53.998480787Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.46608932Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=47cc7af1-dbb3-4a82-bf50-b4f35103e2fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.467067526Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=27343265-0362-4b19-a54c-6de2fa2781ad name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.468225781Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper" id=a3fd7157-d11b-408f-b96b-81a3d81c2cd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.468457775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.477249699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.477899085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.495145498Z" level=info msg="Created container 1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper" id=a3fd7157-d11b-408f-b96b-81a3d81c2cd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.495914897Z" level=info msg="Starting container: 1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1" id=95c00582-4d99-421c-b024-ab40f2a081bf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.497391806Z" level=info msg="Started container" PID=1705 containerID=1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper id=95c00582-4d99-421c-b024-ab40f2a081bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698
	Oct 02 21:52:57 old-k8s-version-714101 conmon[1703]: conmon 1b693f68ee8dd6e2309e <ninfo>: container 1705 exited with status 1
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.740030413Z" level=info msg="Removing container: 1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c" id=70527afd-cfbe-4eff-824e-abedb0c4f1e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.754621872Z" level=info msg="Error loading conmon cgroup of container 1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c: cgroup deleted" id=70527afd-cfbe-4eff-824e-abedb0c4f1e3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:52:57 old-k8s-version-714101 crio[650]: time="2025-10-02T21:52:57.761056706Z" level=info msg="Removed container 1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl/dashboard-metrics-scraper" id=70527afd-cfbe-4eff-824e-abedb0c4f1e3 name=/runtime.v1.RuntimeService/RemoveContainer
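
This CRI-O excerpt captures one turn of a crash loop: the dashboard-metrics-scraper container is created and started at 21:52:57, conmon reports PID 1705 exiting with status 1 within the same second, and the previous incarnation (1c0e69e...) is garbage-collected. From the node, crictl can pull the dying container's output (sketch; the ID is the truncated one from the log above):

	sudo crictl ps -a --name dashboard-metrics-scraper
	sudo crictl logs 1b693f68ee8dd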
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1b693f68ee8dd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   2                   f438f6485c9e8       dashboard-metrics-scraper-5f989dc9cf-b8gtl       kubernetes-dashboard
	dd0b9cee0f7e3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   e8d95ce22e87e       storage-provisioner                              kube-system
	a877467241334       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   baf83198bfb1e       kubernetes-dashboard-8694d4445c-m6s5z            kubernetes-dashboard
	5df3b4c4cd17f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   bef6ae50ff6f1       coredns-5dd5756b68-f7qdk                         kube-system
	4cc075aa1f2df       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   a53de98d142aa       busybox                                          default
	6ede589ba5dbe       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   51c5ec2ff84df       kindnet-qgs2b                                    kube-system
	b46c6d49eaea1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   e8d95ce22e87e       storage-provisioner                              kube-system
	9af8e58137628       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   32a6007e37f89       kube-proxy-9ktm4                                 kube-system
	5a83b3b5fdd18       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   41bd58323735e       kube-apiserver-old-k8s-version-714101            kube-system
	c3aedcafe119f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   f2b51ae40d874       kube-scheduler-old-k8s-version-714101            kube-system
	3c684fbd5a7c3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   bf2a8bfaa7b38       kube-controller-manager-old-k8s-version-714101   kube-system
	d36efcb47e31c       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   14046e7067387       etcd-old-k8s-version-714101                      kube-system
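
The table confirms the picture: only dashboard-metrics-scraper is Exited, already on attempt 2, while every other container runs. The pod-level view and the crashed attempt's output (sketch; pod name taken from the table):

	kubectl -n kubernetes-dashboard get pods
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-b8gtl --previous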
	
	
	==> coredns [5df3b4c4cd17f23a02364ab3315805996102ae0b9cc8eaa6c40c3633d6493a30] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41909 - 57219 "HINFO IN 7321415675562643261.1914517515117912143. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013445766s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
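
CoreDNS came up before it could reach the API server: nine "waiting for Kubernetes API" probes, a start with an unsynced cache, then an i/o timeout dialing the 10.96.0.1:443 service VIP. That matches the kindnet reflector errors further down and clears once kube-proxy's rules are in place. An in-cluster reachability check (sketch; assumes a throwaway pod is acceptable and that the busybox build has HTTPS support):

	kubectl run api-check --rm -it --restart=Never --image=busybox:1.36 -- \
	  wget -qO- --no-check-certificate https://10.96.0.1:443/version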
	
	
	==> describe nodes <==
	Name:               old-k8s-version-714101
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-714101
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=old-k8s-version-714101
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_51_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:50:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-714101
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:53:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:50:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:52:43 +0000   Thu, 02 Oct 2025 21:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-714101
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 67eee4e12caa4c1d823624a8d719cd18
	  System UUID:                fd388e5a-8f2f-4643-a470-d71d3d179fee
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-5dd5756b68-f7qdk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-old-k8s-version-714101                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-qgs2b                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-old-k8s-version-714101             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-old-k8s-version-714101    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-9ktm4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-old-k8s-version-714101             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b8gtl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-m6s5z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           117s                   node-controller  Node old-k8s-version-714101 event: Registered Node old-k8s-version-714101 in Controller
	  Normal  NodeReady                102s                   kubelet          Node old-k8s-version-714101 status is now: NodeReady
	  Normal  Starting                 66s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node old-k8s-version-714101 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node old-k8s-version-714101 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-714101 event: Registered Node old-k8s-version-714101 in Controller
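
The event trail shows three kubelet generations (starting 2m16s, 2m8s, and 66s ago), consistent with the stop/start cycle this test exercises; the node re-registers with the controller after each restart. The same trail can be pulled straight from the API (sketch):

	kubectl get events --field-selector involvedObject.name=old-k8s-version-714101 --sort-by=.lastTimestamp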
	
	
	==> dmesg <==
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
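
The overlayfs lines are the 5.15 kernel noting, once per new container filesystem mount, that idmapped layers are unsupported; on this kernel they are informational rather than a failure (idmapped overlayfs support arrived in later kernels). Counting them is a cheap way to correlate with container churn on the host (sketch):

	uname -r    # 5.15.0-1084-aws per the kernel section below
	dmesg | grep -c 'idmapped layers are currently not supported'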
	
	
	==> etcd [d36efcb47e31c46319889a12e9d7ca6fbd109d48b8e4129bed48fb9a558f3d81] <==
	{"level":"info","ts":"2025-10-02T21:52:06.724316Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:52:06.724336Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:52:06.724526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T21:52:06.724582Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T21:52:06.724646Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:52:06.72467Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T21:52:06.733464Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:52:06.7335Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T21:52:06.726022Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T21:52:06.733681Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T21:52:06.733701Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T21:52:08.020834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T21:52:08.020962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T21:52:08.021016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T21:52:08.02107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.021103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.021158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.021196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T21:52:08.027364Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-714101 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T21:52:08.02758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:52:08.028283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T21:52:08.028386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T21:52:08.028436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:52:08.029364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T21:52:08.066915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:53:11 up  6:35,  0 user,  load average: 4.44, 2.37, 1.75
	Linux old-k8s-version-714101 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ede589ba5dbe96da518cad779870f082e644a9d501eb5455c06ab59e4fb8bd8] <==
	I1002 21:52:13.757317       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:52:13.758307       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:52:13.758438       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:52:13.758449       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:52:13.758462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:52:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:52:13.967177       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:52:14.011475       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:52:14.011585       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:52:14.011771       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:52:43.968872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:52:43.970509       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:52:43.970716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:52:43.983166       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 21:52:45.612613       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:52:45.612657       1 metrics.go:72] Registering metrics
	I1002 21:52:45.612711       1 controller.go:711] "Syncing nftables rules"
	I1002 21:52:53.967461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:52:53.968565       1 main.go:301] handling current node
	I1002 21:53:03.972153       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:53:03.972187       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a83b3b5fdd1829410dd940329aebf65d78fdcbe04a70857ada30dd577e810fe] <==
	I1002 21:52:12.592749       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 21:52:12.595995       1 aggregator.go:166] initial CRD sync complete...
	I1002 21:52:12.596051       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 21:52:12.596080       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:52:12.596113       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:52:12.596328       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 21:52:12.597168       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 21:52:12.657054       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:52:12.695676       1 trace.go:236] Trace[1711679665]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:50fc29fa-fba4-4c93-b619-e3ec5b5aae13,client:192.168.76.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (02-Oct-2025 21:52:12.090) (total time: 605ms):
	Trace[1711679665]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-714101" already exists 104ms (21:52:12.695)
	Trace[1711679665]: [605.564491ms] [605.564491ms] END
	I1002 21:52:12.729571       1 trace.go:236] Trace[532147769]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6b91340a-bf34-46eb-81a3-b6dfb97715a8,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (02-Oct-2025 21:52:12.090) (total time: 638ms):
	Trace[532147769]: [638.660085ms] [638.660085ms] END
	E1002 21:52:12.916611       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:52:13.043549       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:52:16.283178       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 21:52:16.369980       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 21:52:16.409597       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:52:16.421067       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:52:16.434982       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 21:52:16.487984       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.182.0"}
	I1002 21:52:16.508512       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.16.127"}
	I1002 21:52:26.689769       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:52:26.694357       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 21:52:27.059903       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3c684fbd5a7c318f98ee71b7292da5395690fd5e263ff9214c18a7732d8adeeb] <==
	I1002 21:52:26.755205       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 21:52:26.799653       1 shared_informer.go:318] Caches are synced for disruption
	I1002 21:52:26.827330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.752249ms"
	I1002 21:52:26.827781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.431µs"
	I1002 21:52:26.847155       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	I1002 21:52:26.847257       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-m6s5z"
	I1002 21:52:26.870801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="157.64732ms"
	I1002 21:52:26.883122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="160.44092ms"
	I1002 21:52:26.892260       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="21.316959ms"
	I1002 21:52:26.893178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.038µs"
	I1002 21:52:26.966085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="82.84844ms"
	I1002 21:52:26.977324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.964µs"
	I1002 21:52:27.026145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.943895ms"
	I1002 21:52:27.026315       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.516µs"
	I1002 21:52:27.165601       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 21:52:27.165699       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 21:52:27.211921       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 21:52:34.737153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.889949ms"
	I1002 21:52:34.737271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.912µs"
	I1002 21:52:40.716877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.57µs"
	I1002 21:52:41.723626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.958µs"
	I1002 21:52:42.718737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.292µs"
	I1002 21:52:51.008297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.54547ms"
	I1002 21:52:51.009882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.039µs"
	I1002 21:52:57.762483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.136µs"
	
	
	==> kube-proxy [9af8e581376283afb448db698b74533910e6b9797b8fec4999d5b566a07a942a] <==
	I1002 21:52:15.054412       1 server_others.go:69] "Using iptables proxy"
	I1002 21:52:15.145263       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1002 21:52:15.950629       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:52:16.070341       1 server_others.go:152] "Using iptables Proxier"
	I1002 21:52:16.070388       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 21:52:16.070397       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 21:52:16.070426       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 21:52:16.070649       1 server.go:846] "Version info" version="v1.28.0"
	I1002 21:52:16.070667       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:52:16.075550       1 config.go:188] "Starting service config controller"
	I1002 21:52:16.075579       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 21:52:16.075603       1 config.go:97] "Starting endpoint slice config controller"
	I1002 21:52:16.075607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 21:52:16.084512       1 config.go:315] "Starting node config controller"
	I1002 21:52:16.084541       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 21:52:16.177174       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 21:52:16.177223       1 shared_informer.go:318] Caches are synced for service config
	I1002 21:52:16.186748       1 shared_informer.go:318] Caches are synced for node config
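
kube-proxy is running in iptables mode with only an IPv4 cluster CIDR, so IPv6 local-traffic detection falls back to a no-op, as the two server_others lines note. The service rules it programs can be read back on the node (sketch):

	sudo iptables-save -t nat | grep KUBE-SERVICES | head -n 5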
	
	
	==> kube-scheduler [c3aedcafe119fc8bb499f43ec599913cb04664925ad686de580218aff808d473] <==
	I1002 21:52:09.791079       1 serving.go:348] Generated self-signed cert in-memory
	I1002 21:52:15.925260       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1002 21:52:15.925294       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:52:15.936226       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 21:52:15.936431       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 21:52:15.936449       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 21:52:15.936463       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 21:52:15.979364       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:52:15.979410       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 21:52:15.986430       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:52:15.986465       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 21:52:16.057203       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 21:52:16.092038       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 21:52:16.094274       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 21:52:20 old-k8s-version-714101 kubelet[776]: I1002 21:52:20.972215     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.861315     776 topology_manager.go:215] "Topology Admit Handler" podUID="f3377233-589f-43c3-8135-33c09c2b7651" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-m6s5z"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.880061     776 topology_manager.go:215] "Topology Admit Handler" podUID="34ef4770-262b-49f2-848d-505bea074a2b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.921488     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3377233-589f-43c3-8135-33c09c2b7651-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-m6s5z\" (UID: \"f3377233-589f-43c3-8135-33c09c2b7651\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-m6s5z"
	Oct 02 21:52:26 old-k8s-version-714101 kubelet[776]: I1002 21:52:26.921563     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzln\" (UniqueName: \"kubernetes.io/projected/f3377233-589f-43c3-8135-33c09c2b7651-kube-api-access-ktzln\") pod \"kubernetes-dashboard-8694d4445c-m6s5z\" (UID: \"f3377233-589f-43c3-8135-33c09c2b7651\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-m6s5z"
	Oct 02 21:52:27 old-k8s-version-714101 kubelet[776]: I1002 21:52:27.022672     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/34ef4770-262b-49f2-848d-505bea074a2b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8gtl\" (UID: \"34ef4770-262b-49f2-848d-505bea074a2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	Oct 02 21:52:27 old-k8s-version-714101 kubelet[776]: I1002 21:52:27.022744     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557fj\" (UniqueName: \"kubernetes.io/projected/34ef4770-262b-49f2-848d-505bea074a2b-kube-api-access-557fj\") pod \"dashboard-metrics-scraper-5f989dc9cf-b8gtl\" (UID: \"34ef4770-262b-49f2-848d-505bea074a2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl"
	Oct 02 21:52:27 old-k8s-version-714101 kubelet[776]: W1002 21:52:27.246361     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e7b0b66ac30c8142e514bee9004ed8c26ea5e57c22de933618d11b2cb28adc67/crio-f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698 WatchSource:0}: Error finding container f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698: Status 404 returned error can't find the container with id f438f6485c9e8b0d2fae63954c0e24eb4ebac02811c2380e16dc455846c1c698
	Oct 02 21:52:40 old-k8s-version-714101 kubelet[776]: I1002 21:52:40.688066     776 scope.go:117] "RemoveContainer" containerID="82981fca6afbb9af1675da32301aa1dd945ad912d451ce0363a73ca0b4587bae"
	Oct 02 21:52:40 old-k8s-version-714101 kubelet[776]: I1002 21:52:40.714223     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-m6s5z" podStartSLOduration=7.825599074 podCreationTimestamp="2025-10-02 21:52:26 +0000 UTC" firstStartedPulling="2025-10-02 21:52:27.255143458 +0000 UTC m=+21.968091278" lastFinishedPulling="2025-10-02 21:52:34.143012397 +0000 UTC m=+28.855960225" observedRunningTime="2025-10-02 21:52:34.703135599 +0000 UTC m=+29.416083427" watchObservedRunningTime="2025-10-02 21:52:40.713468021 +0000 UTC m=+35.426415849"
	Oct 02 21:52:41 old-k8s-version-714101 kubelet[776]: I1002 21:52:41.692067     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:41 old-k8s-version-714101 kubelet[776]: I1002 21:52:41.693175     776 scope.go:117] "RemoveContainer" containerID="82981fca6afbb9af1675da32301aa1dd945ad912d451ce0363a73ca0b4587bae"
	Oct 02 21:52:41 old-k8s-version-714101 kubelet[776]: E1002 21:52:41.706722     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:52:42 old-k8s-version-714101 kubelet[776]: I1002 21:52:42.695855     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:42 old-k8s-version-714101 kubelet[776]: E1002 21:52:42.696587     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:52:45 old-k8s-version-714101 kubelet[776]: I1002 21:52:45.703675     776 scope.go:117] "RemoveContainer" containerID="b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560"
	Oct 02 21:52:47 old-k8s-version-714101 kubelet[776]: I1002 21:52:47.183498     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:47 old-k8s-version-714101 kubelet[776]: E1002 21:52:47.183865     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: I1002 21:52:57.465020     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: I1002 21:52:57.737248     776 scope.go:117] "RemoveContainer" containerID="1c0e69e87858500cfcdb1f6c762be5a227015666e24ccc5e8cb51d664b8bef9c"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: I1002 21:52:57.737716     776 scope.go:117] "RemoveContainer" containerID="1b693f68ee8dd6e2309e37f4197f27094d4dfaa1d9c6f3ba3d22a7358a840ba1"
	Oct 02 21:52:57 old-k8s-version-714101 kubelet[776]: E1002 21:52:57.738171     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8gtl_kubernetes-dashboard(34ef4770-262b-49f2-848d-505bea074a2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8gtl" podUID="34ef4770-262b-49f2-848d-505bea074a2b"
	Oct 02 21:53:05 old-k8s-version-714101 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:53:05 old-k8s-version-714101 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:53:05 old-k8s-version-714101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a87746724133411149d8880820420b63e43012165b82087926e62ccf4703f7e5] <==
	2025/10/02 21:52:34 Using namespace: kubernetes-dashboard
	2025/10/02 21:52:34 Using in-cluster config to connect to apiserver
	2025/10/02 21:52:34 Using secret token for csrf signing
	2025/10/02 21:52:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:52:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:52:34 Successful initial request to the apiserver, version: v1.28.0
	2025/10/02 21:52:34 Generating JWE encryption key
	2025/10/02 21:52:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:52:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:52:35 Initializing JWE encryption key from synchronized object
	2025/10/02 21:52:35 Creating in-cluster Sidecar client
	2025/10/02 21:52:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:52:35 Serving insecurely on HTTP port: 9090
	2025/10/02 21:53:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:52:34 Starting overwatch
	
	
	==> storage-provisioner [b46c6d49eaea1a24cb1c3f40313daa77b388cfaf594a9dca66a287e6673de560] <==
	I1002 21:52:14.722934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:52:44.724967       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dd0b9cee0f7e3c89c78675db4423007b357320668c81cbc1fdc6e2b580b171de] <==
	I1002 21:52:45.781591       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:52:45.802713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:52:45.802869       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 21:53:03.206742       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:53:03.206876       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89d3a326-7957-4e4b-8a32-0337fd7fbaa5", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-714101_0fe09646-4bb0-4e8b-8df2-8be4624cef47 became leader
	I1002 21:53:03.207579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714101_0fe09646-4bb0-4e8b-8df2-8be4624cef47!
	I1002 21:53:03.308258       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-714101_0fe09646-4bb0-4e8b-8df2-8be4624cef47!
	

                                                
                                                
-- /stdout --
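Note on the storage-provisioner excerpt above: the restarted provisioner only acquired the kube-system/k8s.io-minikube-hostpath leader lease at 21:53:03, roughly 18 seconds after it started, which is consistent with it having to wait out the lease held by the earlier instance that crashed on the apiserver i/o timeout. If the cluster is still reachable, the leader record can be inspected by hand (a hedged sketch; the object name and namespace are taken from the event line above, and under this older Endpoints-based resource lock the leader identity lives in an annotation on that object):

	kubectl --context old-k8s-version-714101 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml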
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714101 -n old-k8s-version-714101
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714101 -n old-k8s-version-714101: exit status 2 (384.101825ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-714101 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (300.95384ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
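The paused-state check that fails here is the runc invocation quoted in the error chain above ("check paused: list paused: runc: sudo runc list -f json"); it can be reproduced outside the test harness by shelling into the node (a minimal sketch; the profile name comes from this test and the inner command is copied verbatim from the error):

	out/minikube-linux-arm64 -p no-preload-661954 ssh -- sudo runc list -f json

Per the stderr, this exits with status 1 and "open /run/runc: no such file or directory", i.e. runc's default state directory is absent on the node, which would explain why pause, unpause, and addons enable all fail with the same MK_ADDON_ENABLE_PAUSED-style errors in this run.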
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-661954 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-661954 describe deploy/metrics-server -n kube-system: exit status 1 (116.228838ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-661954 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
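The assertion above amounts to reading the image out of the deployment spec; since the deployment was never created in this run, the equivalent manual check (a hedged sketch using standard kubectl jsonpath syntax) would be:

	kubectl --context no-preload-661954 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The test expects the printed value to contain fake.domain/registry.k8s.io/echoserver:1.4, i.e. the custom registry and image passed via --registries and --images.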
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-661954
helpers_test.go:243: (dbg) docker inspect no-preload-661954:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135",
	        "Created": "2025-10-02T21:52:01.48084196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1183989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:52:01.562941274Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/hosts",
	        "LogPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135-json.log",
	        "Name": "/no-preload-661954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-661954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-661954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135",
	                "LowerDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-661954",
	                "Source": "/var/lib/docker/volumes/no-preload-661954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-661954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-661954",
	                "name.minikube.sigs.k8s.io": "no-preload-661954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e5b55f24332e5679820cfb70aa3d091860eef784369a1d4ef0801b6078b517e",
	            "SandboxKey": "/var/run/docker/netns/0e5b55f24332",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-661954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:ee:94:e0:98:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76a20fa3488da5dc1336dc065545b6cee383a338650fbc63c52f9a29c8b4abb9",
	                    "EndpointID": "78d44daef2cd8a9a0aabb2aae85cc9e4a84bab0b591ae1298c45187d5b03de3f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-661954",
	                        "f3d778675684"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954
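For reference, minikube status accepts a Go template over its status struct via --format; {{.Host}} is used here and {{.APIServer}} at the top of this post-mortem, and several fields can be combined in one call (a hedged sketch; the Kubelet field name is an assumption, only Host and APIServer appear in this report):

	out/minikube-linux-arm64 status -p no-preload-661954 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'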
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-661954 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-661954 logs -n 25: (1.827112712s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-644857 sudo crio config                                                                                                                                                                                                             │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ delete  │ -p cilium-644857                                                                                                                                                                                                                              │ cilium-644857             │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │ 02 Oct 25 21:41 UTC │
	│ start   │ -p force-systemd-env-916563 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:41 UTC │                     │
	│ ssh     │ force-systemd-flag-987043 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-flag-987043                                                                                                                                                                                                                  │ force-systemd-flag-987043 │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563  │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461       │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ stop    │ -p old-k8s-version-714101 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864    │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954         │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:53 UTC │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101    │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954         │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:53:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:53:15.100955 1189833 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:53:15.101126 1189833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:15.101141 1189833 out.go:374] Setting ErrFile to fd 2...
	I1002 21:53:15.101147 1189833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:15.101435 1189833 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:53:15.101904 1189833 out.go:368] Setting JSON to false
	I1002 21:53:15.102907 1189833 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23732,"bootTime":1759418263,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:53:15.102990 1189833 start.go:140] virtualization:  
	I1002 21:53:15.107125 1189833 out.go:179] * [embed-certs-132977] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:53:15.111481 1189833 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:53:15.111650 1189833 notify.go:221] Checking for updates...
	I1002 21:53:15.118440 1189833 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:53:15.121536 1189833 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:15.124584 1189833 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:53:15.127834 1189833 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:53:15.130844 1189833 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:53:15.134519 1189833 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:15.134662 1189833 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:53:15.157924 1189833 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:53:15.158096 1189833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:15.218760 1189833 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:53:15.208767147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:15.218880 1189833 docker.go:319] overlay module found
	I1002 21:53:15.222178 1189833 out.go:179] * Using the docker driver based on user configuration
	I1002 21:53:15.225109 1189833 start.go:306] selected driver: docker
	I1002 21:53:15.225135 1189833 start.go:936] validating driver "docker" against <nil>
	I1002 21:53:15.225148 1189833 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:53:15.225950 1189833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:15.290715 1189833 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:53:15.280580475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:15.290893 1189833 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:53:15.291133 1189833 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:15.294231 1189833 out.go:179] * Using Docker driver with root privileges
	I1002 21:53:15.297184 1189833 cni.go:84] Creating CNI manager for ""
	I1002 21:53:15.297271 1189833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:15.297285 1189833 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:53:15.297374 1189833 start.go:350] cluster config:
	{Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:15.300598 1189833 out.go:179] * Starting "embed-certs-132977" primary control-plane node in "embed-certs-132977" cluster
	I1002 21:53:15.304041 1189833 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:53:15.307138 1189833 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:53:15.309991 1189833 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:15.310076 1189833 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:53:15.310091 1189833 cache.go:59] Caching tarball of preloaded images
	I1002 21:53:15.310172 1189833 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:53:15.310458 1189833 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:53:15.310477 1189833 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:53:15.310660 1189833 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/config.json ...
	I1002 21:53:15.310701 1189833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/config.json: {Name:mk34aaf089afab1a38df01d21f9fd301749514a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:15.330360 1189833 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:53:15.330384 1189833 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:53:15.330402 1189833 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:53:15.330425 1189833 start.go:361] acquireMachinesLock for embed-certs-132977: {Name:mkeaddb5abf9563079c0434ecbd0586026902019 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:15.330547 1189833 start.go:365] duration metric: took 103.759µs to acquireMachinesLock for "embed-certs-132977"
	I1002 21:53:15.330572 1189833 start.go:94] Provisioning new machine with config: &{Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:15.330655 1189833 start.go:126] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 02 21:53:07 no-preload-661954 crio[837]: time="2025-10-02T21:53:07.520592124Z" level=info msg="Created container 927a578dadfe689001f8965dfb1d623fb8713bdddcffad17e9f1c27a36e3ae31: kube-system/coredns-66bc5c9577-ddsr2/coredns" id=f6244308-c488-44ef-9a2d-3290a301c51d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:07 no-preload-661954 crio[837]: time="2025-10-02T21:53:07.523597362Z" level=info msg="Starting container: 927a578dadfe689001f8965dfb1d623fb8713bdddcffad17e9f1c27a36e3ae31" id=641e240e-57ad-45cd-a7ea-fb5ffd402e22 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:53:07 no-preload-661954 crio[837]: time="2025-10-02T21:53:07.534227941Z" level=info msg="Started container" PID=2473 containerID=927a578dadfe689001f8965dfb1d623fb8713bdddcffad17e9f1c27a36e3ae31 description=kube-system/coredns-66bc5c9577-ddsr2/coredns id=641e240e-57ad-45cd-a7ea-fb5ffd402e22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=40c790c55cc65562018f1961c5a4eba71f8cd33f6ab186d07f5a648aaeaeaf8d
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.032165805Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0da5e29f-4713-4019-8519-ca73521e7652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.032241545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.046625141Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:10e576b6fb1675f8fa4a34ff48008b563c77d5dbe6d200496c7110c1c9c18ff4 UID:2f75d586-1180-436a-8778-b22230b1b890 NetNS:/var/run/netns/aaa163f0-84fb-4e85-ad32-6685610d0911 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004edc20}] Aliases:map[]}"
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.046840897Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.063357459Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:10e576b6fb1675f8fa4a34ff48008b563c77d5dbe6d200496c7110c1c9c18ff4 UID:2f75d586-1180-436a-8778-b22230b1b890 NetNS:/var/run/netns/aaa163f0-84fb-4e85-ad32-6685610d0911 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004edc20}] Aliases:map[]}"
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.063706914Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.075873568Z" level=info msg="Ran pod sandbox 10e576b6fb1675f8fa4a34ff48008b563c77d5dbe6d200496c7110c1c9c18ff4 with infra container: default/busybox/POD" id=0da5e29f-4713-4019-8519-ca73521e7652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.078490885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=991bca20-4689-4723-bece-ba84cbb7602b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.078624666Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=991bca20-4689-4723-bece-ba84cbb7602b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.078671098Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=991bca20-4689-4723-bece-ba84cbb7602b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.080330738Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eef23fd5-8b86-4b39-8ed3-54984926e1e8 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:53:11 no-preload-661954 crio[837]: time="2025-10-02T21:53:11.094683944Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.058473061Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=eef23fd5-8b86-4b39-8ed3-54984926e1e8 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.059146881Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc2e3b2e-f3cc-450c-9e15-e0ea516dea6a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.061001602Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=80cee277-51dc-4faa-9ddb-fda0daee1439 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.068688767Z" level=info msg="Creating container: default/busybox/busybox" id=dc3baa7d-5331-4a5b-9951-f625e9f3df60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.069441617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.077874215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.078569541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.093680789Z" level=info msg="Created container 0b10a5a295fc443f7f82c3f9681d9982129f24dff429fd8cf5ef44da26c5badb: default/busybox/busybox" id=dc3baa7d-5331-4a5b-9951-f625e9f3df60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.094890399Z" level=info msg="Starting container: 0b10a5a295fc443f7f82c3f9681d9982129f24dff429fd8cf5ef44da26c5badb" id=f2b96dde-b556-4f8d-b918-538fa0dc10a3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:53:13 no-preload-661954 crio[837]: time="2025-10-02T21:53:13.097475274Z" level=info msg="Started container" PID=2526 containerID=0b10a5a295fc443f7f82c3f9681d9982129f24dff429fd8cf5ef44da26c5badb description=default/busybox/busybox id=f2b96dde-b556-4f8d-b918-538fa0dc10a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10e576b6fb1675f8fa4a34ff48008b563c77d5dbe6d200496c7110c1c9c18ff4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0b10a5a295fc4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   10e576b6fb167       busybox                                     default
	927a578dadfe6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   40c790c55cc65       coredns-66bc5c9577-ddsr2                    kube-system
	a95b190e85dee       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   80ae9aefc5046       storage-provisioner                         kube-system
	3f883e1e42343       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   f0eb0089d7954       kindnet-flmgm                               kube-system
	eea9240f2c5df       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   0de137aa7dd85       kube-proxy-5jstv                            kube-system
	fb85df9c3527a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      41 seconds ago      Running             kube-apiserver            0                   187488203dce8       kube-apiserver-no-preload-661954            kube-system
	c322d89b788d5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      41 seconds ago      Running             kube-scheduler            0                   07273142ffed0       kube-scheduler-no-preload-661954            kube-system
	dfc912d2c5f02       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      41 seconds ago      Running             kube-controller-manager   0                   e5ccd41f1240e       kube-controller-manager-no-preload-661954   kube-system
	204daccf1940f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      41 seconds ago      Running             etcd                      0                   78f246f719d30       etcd-no-preload-661954                      kube-system
	
	
	==> coredns [927a578dadfe689001f8965dfb1d623fb8713bdddcffad17e9f1c27a36e3ae31] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44726 - 36829 "HINFO IN 6569248315470129144.660422213020488046. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011054224s
	
	
	==> describe nodes <==
	Name:               no-preload-661954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-661954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=no-preload-661954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_52_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-661954
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:53:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:53:18 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:53:18 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:53:18 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:53:18 +0000   Thu, 02 Oct 2025 21:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-661954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 38ff16d1ae5f49468d3d139f17c1281a
	  System UUID:                a884495e-b86e-4c01-a759-33d7d494f01d
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-ddsr2                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-661954                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-flmgm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-661954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-661954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-5jstv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-661954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 26s   kube-proxy       
	  Normal   Starting                 34s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s   kubelet          Node no-preload-661954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s   kubelet          Node no-preload-661954 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s   kubelet          Node no-preload-661954 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s   node-controller  Node no-preload-661954 event: Registered Node no-preload-661954 in Controller
	  Normal   NodeReady                15s   kubelet          Node no-preload-661954 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 21:15] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [204daccf1940f0c4f3516a79fa5ec8606b7d9a37539292d078c64acd171b41cb] <==
	{"level":"warn","ts":"2025-10-02T21:52:42.820979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:42.845990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:42.875102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:42.949512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:42.969336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:42.969882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.010151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.027851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.053426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.097909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.127295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.178556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.203367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.241040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.269893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.306692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.319662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.356240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.390645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.401246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.424662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.457987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.478786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.504889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:52:43.636626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44776","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:53:21 up  6:35,  0 user,  load average: 3.91, 2.33, 1.74
	Linux no-preload-661954 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f883e1e42343fc98fad9ab42b354919d305961beb28fc9e3a676c069c79fe69] <==
	I1002 21:52:56.416501       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:52:56.416902       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:52:56.417047       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:52:56.417065       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:52:56.417075       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:52:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:52:56.715454       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:52:56.715480       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:52:56.715490       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:52:56.716774       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 21:52:56.915875       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:52:56.915987       1 metrics.go:72] Registering metrics
	I1002 21:52:56.916072       1 controller.go:711] "Syncing nftables rules"
	I1002 21:53:06.722793       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:53:06.722855       1 main.go:301] handling current node
	I1002 21:53:16.715541       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:53:16.715575       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fb85df9c3527a42edccb80351aab9a3d1c62e6e2ee0bba7dc643aeb3dee4ccf4] <==
	E1002 21:52:44.710751       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1002 21:52:44.741105       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:52:44.759105       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:52:44.768562       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:52:44.768614       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 21:52:44.781811       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:52:44.785092       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:52:44.912676       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:52:45.389257       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 21:52:45.414354       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:52:45.414595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:52:46.345444       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:52:46.400694       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:52:46.481423       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 21:52:46.488909       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 21:52:46.490205       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:52:46.496748       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:52:46.604756       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:52:47.548299       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:52:47.585499       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 21:52:47.599703       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:52:51.808640       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:52:51.862181       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:52:51.867232       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:52:52.802459       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [dfc912d2c5f0276789c18272949f7991dd61e792d3062f9e3b2f89493bede89f] <==
	I1002 21:52:51.636238       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:52:51.636250       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:52:51.651768       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:52:51.651810       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:52:51.651868       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:52:51.651936       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:52:51.652000       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-661954"
	I1002 21:52:51.652055       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 21:52:51.652281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:52:51.652382       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:52:51.652559       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:52:51.652943       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:52:51.653902       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:52:51.653940       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 21:52:51.654224       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:52:51.654672       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:52:51.654752       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:52:51.655328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 21:52:51.655936       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:52:51.657035       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:52:51.661323       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:52:51.668688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:52:51.678918       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:52:51.680102       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:53:11.654587       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [eea9240f2c5dfdfea28b64b466e1a5fcf54d319e6776ab07cffb61e6963f912b] <==
	I1002 21:52:54.091945       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:52:54.318543       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:52:54.418901       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:52:54.418933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:52:54.419016       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:52:54.477846       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:52:54.477905       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:52:54.487063       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:52:54.487750       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:52:54.487777       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:52:54.490233       1 config.go:200] "Starting service config controller"
	I1002 21:52:54.490256       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:52:54.490277       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:52:54.490281       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:52:54.490303       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:52:54.490307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:52:54.492121       1 config.go:309] "Starting node config controller"
	I1002 21:52:54.492134       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:52:54.492141       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:52:54.590653       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:52:54.590688       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:52:54.590739       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c322d89b788d5a8bb04c025a276316df8a7f1126fc9bec9d02279c2403ff19c4] <==
	E1002 21:52:44.645151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:52:44.645217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:52:44.645321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:52:44.645485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:52:44.645551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:52:44.645609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 21:52:44.645676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:52:44.645735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:52:44.649474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:52:44.649568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:52:45.521627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:52:45.573590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:52:45.623241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:52:45.632259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:52:45.632327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:52:45.699576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:52:45.730353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:52:45.780991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:52:45.793766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:52:45.834645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:52:45.940308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:52:45.954664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 21:52:45.961950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:52:45.968302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1002 21:52:49.117912       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:52:48 no-preload-661954 kubelet[1990]: I1002 21:52:48.596321    1990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-661954" podStartSLOduration=1.5963097560000001 podStartE2EDuration="1.596309756s" podCreationTimestamp="2025-10-02 21:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:52:48.575783028 +0000 UTC m=+1.215369368" watchObservedRunningTime="2025-10-02 21:52:48.596309756 +0000 UTC m=+1.235896088"
	Oct 02 21:52:48 no-preload-661954 kubelet[1990]: E1002 21:52:48.619967    1990 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-661954\" already exists" pod="kube-system/etcd-no-preload-661954"
	Oct 02 21:52:48 no-preload-661954 kubelet[1990]: E1002 21:52:48.628660    1990 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-661954\" already exists" pod="kube-system/kube-scheduler-no-preload-661954"
	Oct 02 21:52:51 no-preload-661954 kubelet[1990]: I1002 21:52:51.670924    1990 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 21:52:51 no-preload-661954 kubelet[1990]: I1002 21:52:51.671709    1990 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.129931    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51774e0f-371e-4a31-801f-9ca681eefe74-lib-modules\") pod \"kube-proxy-5jstv\" (UID: \"51774e0f-371e-4a31-801f-9ca681eefe74\") " pod="kube-system/kube-proxy-5jstv"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.129975    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs86q\" (UniqueName: \"kubernetes.io/projected/51774e0f-371e-4a31-801f-9ca681eefe74-kube-api-access-qs86q\") pod \"kube-proxy-5jstv\" (UID: \"51774e0f-371e-4a31-801f-9ca681eefe74\") " pod="kube-system/kube-proxy-5jstv"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.130001    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51774e0f-371e-4a31-801f-9ca681eefe74-kube-proxy\") pod \"kube-proxy-5jstv\" (UID: \"51774e0f-371e-4a31-801f-9ca681eefe74\") " pod="kube-system/kube-proxy-5jstv"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.130022    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51774e0f-371e-4a31-801f-9ca681eefe74-xtables-lock\") pod \"kube-proxy-5jstv\" (UID: \"51774e0f-371e-4a31-801f-9ca681eefe74\") " pod="kube-system/kube-proxy-5jstv"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.234076    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bb04d59b-0f2b-44db-bbd1-53d35a0d1406-cni-cfg\") pod \"kindnet-flmgm\" (UID: \"bb04d59b-0f2b-44db-bbd1-53d35a0d1406\") " pod="kube-system/kindnet-flmgm"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.234176    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb04d59b-0f2b-44db-bbd1-53d35a0d1406-xtables-lock\") pod \"kindnet-flmgm\" (UID: \"bb04d59b-0f2b-44db-bbd1-53d35a0d1406\") " pod="kube-system/kindnet-flmgm"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.234197    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb04d59b-0f2b-44db-bbd1-53d35a0d1406-lib-modules\") pod \"kindnet-flmgm\" (UID: \"bb04d59b-0f2b-44db-bbd1-53d35a0d1406\") " pod="kube-system/kindnet-flmgm"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.234237    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljrv\" (UniqueName: \"kubernetes.io/projected/bb04d59b-0f2b-44db-bbd1-53d35a0d1406-kube-api-access-pljrv\") pod \"kindnet-flmgm\" (UID: \"bb04d59b-0f2b-44db-bbd1-53d35a0d1406\") " pod="kube-system/kindnet-flmgm"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.293653    1990 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:52:53 no-preload-661954 kubelet[1990]: I1002 21:52:53.646404    1990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jstv" podStartSLOduration=1.646386597 podStartE2EDuration="1.646386597s" podCreationTimestamp="2025-10-02 21:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:52:53.628119583 +0000 UTC m=+6.267705932" watchObservedRunningTime="2025-10-02 21:52:53.646386597 +0000 UTC m=+6.285972929"
	Oct 02 21:52:56 no-preload-661954 kubelet[1990]: I1002 21:52:56.630553    1990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-flmgm" podStartSLOduration=1.736477785 podStartE2EDuration="4.630496002s" podCreationTimestamp="2025-10-02 21:52:52 +0000 UTC" firstStartedPulling="2025-10-02 21:52:53.45399213 +0000 UTC m=+6.093578470" lastFinishedPulling="2025-10-02 21:52:56.348010339 +0000 UTC m=+8.987596687" observedRunningTime="2025-10-02 21:52:56.629517574 +0000 UTC m=+9.269103914" watchObservedRunningTime="2025-10-02 21:52:56.630496002 +0000 UTC m=+9.270082342"
	Oct 02 21:53:06 no-preload-661954 kubelet[1990]: I1002 21:53:06.981490    1990 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 21:53:07 no-preload-661954 kubelet[1990]: I1002 21:53:07.139928    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/862f39f9-ad9b-4268-86f6-775e9221224b-tmp\") pod \"storage-provisioner\" (UID: \"862f39f9-ad9b-4268-86f6-775e9221224b\") " pod="kube-system/storage-provisioner"
	Oct 02 21:53:07 no-preload-661954 kubelet[1990]: I1002 21:53:07.140145    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af6a1936-ec5c-4c31-9f22-73cc6f7042c3-config-volume\") pod \"coredns-66bc5c9577-ddsr2\" (UID: \"af6a1936-ec5c-4c31-9f22-73cc6f7042c3\") " pod="kube-system/coredns-66bc5c9577-ddsr2"
	Oct 02 21:53:07 no-preload-661954 kubelet[1990]: I1002 21:53:07.140241    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv587\" (UniqueName: \"kubernetes.io/projected/af6a1936-ec5c-4c31-9f22-73cc6f7042c3-kube-api-access-hv587\") pod \"coredns-66bc5c9577-ddsr2\" (UID: \"af6a1936-ec5c-4c31-9f22-73cc6f7042c3\") " pod="kube-system/coredns-66bc5c9577-ddsr2"
	Oct 02 21:53:07 no-preload-661954 kubelet[1990]: I1002 21:53:07.140328    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfw5\" (UniqueName: \"kubernetes.io/projected/862f39f9-ad9b-4268-86f6-775e9221224b-kube-api-access-bbfw5\") pod \"storage-provisioner\" (UID: \"862f39f9-ad9b-4268-86f6-775e9221224b\") " pod="kube-system/storage-provisioner"
	Oct 02 21:53:07 no-preload-661954 kubelet[1990]: W1002 21:53:07.459957    1990 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/crio-40c790c55cc65562018f1961c5a4eba71f8cd33f6ab186d07f5a648aaeaeaf8d WatchSource:0}: Error finding container 40c790c55cc65562018f1961c5a4eba71f8cd33f6ab186d07f5a648aaeaeaf8d: Status 404 returned error can't find the container with id 40c790c55cc65562018f1961c5a4eba71f8cd33f6ab186d07f5a648aaeaeaf8d
	Oct 02 21:53:07 no-preload-661954 kubelet[1990]: I1002 21:53:07.705461    1990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ddsr2" podStartSLOduration=15.705443743 podStartE2EDuration="15.705443743s" podCreationTimestamp="2025-10-02 21:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:53:07.705114611 +0000 UTC m=+20.344700951" watchObservedRunningTime="2025-10-02 21:53:07.705443743 +0000 UTC m=+20.345030075"
	Oct 02 21:53:08 no-preload-661954 kubelet[1990]: I1002 21:53:08.680347    1990 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.68032804 podStartE2EDuration="14.68032804s" podCreationTimestamp="2025-10-02 21:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:53:07.739496515 +0000 UTC m=+20.379082888" watchObservedRunningTime="2025-10-02 21:53:08.68032804 +0000 UTC m=+21.319914371"
	Oct 02 21:53:10 no-preload-661954 kubelet[1990]: I1002 21:53:10.784513    1990 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gvv2\" (UniqueName: \"kubernetes.io/projected/2f75d586-1180-436a-8778-b22230b1b890-kube-api-access-6gvv2\") pod \"busybox\" (UID: \"2f75d586-1180-436a-8778-b22230b1b890\") " pod="default/busybox"
	
	
	==> storage-provisioner [a95b190e85dee263e72279b4da8b635ff83603b4c095dcfe7a4755c49d1a9a55] <==
	I1002 21:53:07.465365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:53:07.491023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:53:07.491060       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:53:07.535556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:07.545374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:53:07.545584       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:53:07.545768       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-661954_9a8d5044-847c-432a-97ae-3cd5ce76c65e!
	I1002 21:53:07.554685       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"93c0aec5-6a68-4ee7-97c4-954139f85db0", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-661954_9a8d5044-847c-432a-97ae-3cd5ce76c65e became leader
	W1002 21:53:07.558841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:07.577582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:53:07.646379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-661954_9a8d5044-847c-432a-97ae-3cd5ce76c65e!
	W1002 21:53:09.581661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:09.587937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:11.591588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:11.597255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:13.600545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:13.614946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:15.618184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:15.628330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:17.631689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:17.636562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:19.640081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:53:19.649572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-661954 -n no-preload-661954
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-661954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.63s)
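
For reference, the post-mortem dump above is assembled from standard inspection commands; a minimal sketch for reproducing the same views against this profile (assuming the cluster is still running — the exact commands here are illustrative, not taken from the harness):

	# aggregated daemon/component logs (the "==> CRI-O <==" style sections above)
	out/minikube-linux-arm64 -p no-preload-661954 logs
	# CRI-level container table (the "==> container status <==" section)
	out/minikube-linux-arm64 -p no-preload-661954 ssh -- sudo crictl ps -a
	# node conditions, capacity, and events (the "==> describe nodes <==" section)
	kubectl --context no-preload-661954 describe node no-preload-661954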

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-661954 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-661954 --alsologtostderr -v=1: exit status 80 (1.941483206s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-661954 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:54:46.436149 1195376 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:54:46.436321 1195376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:54:46.436356 1195376 out.go:374] Setting ErrFile to fd 2...
	I1002 21:54:46.436376 1195376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:54:46.436671 1195376 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:54:46.436975 1195376 out.go:368] Setting JSON to false
	I1002 21:54:46.437027 1195376 mustload.go:65] Loading cluster: no-preload-661954
	I1002 21:54:46.437415 1195376 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:54:46.437917 1195376 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:54:46.457173 1195376 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:54:46.457535 1195376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:54:46.525159 1195376 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:54:46.515350514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:54:46.525819 1195376 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-661954 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 21:54:46.529316 1195376 out.go:179] * Pausing node no-preload-661954 ... 
	I1002 21:54:46.532237 1195376 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:54:46.532584 1195376 ssh_runner.go:195] Run: systemctl --version
	I1002 21:54:46.532638 1195376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:54:46.555174 1195376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:54:46.648834 1195376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:54:46.672172 1195376 pause.go:51] kubelet running: true
	I1002 21:54:46.672247 1195376 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:54:46.933759 1195376 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:54:46.933866 1195376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:54:47.007110 1195376 cri.go:89] found id: "032a3b41ed0c467e55383a336d6fd7f6f244fd085545de3a0e761d76b74d86f8"
	I1002 21:54:47.007133 1195376 cri.go:89] found id: "4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495"
	I1002 21:54:47.007142 1195376 cri.go:89] found id: "6b82861a8945a6d58ec459cbce94b85d54a3a5234cc6ba7d3d096a78eb01fdee"
	I1002 21:54:47.007157 1195376 cri.go:89] found id: "9e3cec57132b725065e74b658befc5de805ca717fa3dc565174c378bb7fcc9c5"
	I1002 21:54:47.007161 1195376 cri.go:89] found id: "a6ab31d1759e69dc797d55b97650619bcf6b2ffed03ceade3ad78af7a9ef9788"
	I1002 21:54:47.007165 1195376 cri.go:89] found id: "3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98"
	I1002 21:54:47.007168 1195376 cri.go:89] found id: "c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270"
	I1002 21:54:47.007171 1195376 cri.go:89] found id: "88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2"
	I1002 21:54:47.007175 1195376 cri.go:89] found id: "5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08"
	I1002 21:54:47.007189 1195376 cri.go:89] found id: "859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	I1002 21:54:47.007196 1195376 cri.go:89] found id: "7e7d0b2884e0b1793d47f40952ea30e017bf6f19ac40af2b25670184e8f23167"
	I1002 21:54:47.007199 1195376 cri.go:89] found id: ""
	I1002 21:54:47.007248 1195376 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:54:47.026335 1195376 retry.go:31] will retry after 231.569405ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:54:47Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:54:47.258836 1195376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:54:47.272831 1195376 pause.go:51] kubelet running: false
	I1002 21:54:47.272948 1195376 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:54:47.455522 1195376 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:54:47.455622 1195376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:54:47.533106 1195376 cri.go:89] found id: "032a3b41ed0c467e55383a336d6fd7f6f244fd085545de3a0e761d76b74d86f8"
	I1002 21:54:47.533130 1195376 cri.go:89] found id: "4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495"
	I1002 21:54:47.533135 1195376 cri.go:89] found id: "6b82861a8945a6d58ec459cbce94b85d54a3a5234cc6ba7d3d096a78eb01fdee"
	I1002 21:54:47.533139 1195376 cri.go:89] found id: "9e3cec57132b725065e74b658befc5de805ca717fa3dc565174c378bb7fcc9c5"
	I1002 21:54:47.533142 1195376 cri.go:89] found id: "a6ab31d1759e69dc797d55b97650619bcf6b2ffed03ceade3ad78af7a9ef9788"
	I1002 21:54:47.533145 1195376 cri.go:89] found id: "3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98"
	I1002 21:54:47.533148 1195376 cri.go:89] found id: "c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270"
	I1002 21:54:47.533151 1195376 cri.go:89] found id: "88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2"
	I1002 21:54:47.533155 1195376 cri.go:89] found id: "5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08"
	I1002 21:54:47.533161 1195376 cri.go:89] found id: "859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	I1002 21:54:47.533165 1195376 cri.go:89] found id: "7e7d0b2884e0b1793d47f40952ea30e017bf6f19ac40af2b25670184e8f23167"
	I1002 21:54:47.533168 1195376 cri.go:89] found id: ""
	I1002 21:54:47.533222 1195376 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:54:47.545470 1195376 retry.go:31] will retry after 468.778223ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:54:47Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:54:48.014816 1195376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:54:48.029803 1195376 pause.go:51] kubelet running: false
	I1002 21:54:48.029911 1195376 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:54:48.220731 1195376 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:54:48.220827 1195376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:54:48.294131 1195376 cri.go:89] found id: "032a3b41ed0c467e55383a336d6fd7f6f244fd085545de3a0e761d76b74d86f8"
	I1002 21:54:48.294154 1195376 cri.go:89] found id: "4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495"
	I1002 21:54:48.294159 1195376 cri.go:89] found id: "6b82861a8945a6d58ec459cbce94b85d54a3a5234cc6ba7d3d096a78eb01fdee"
	I1002 21:54:48.294164 1195376 cri.go:89] found id: "9e3cec57132b725065e74b658befc5de805ca717fa3dc565174c378bb7fcc9c5"
	I1002 21:54:48.294167 1195376 cri.go:89] found id: "a6ab31d1759e69dc797d55b97650619bcf6b2ffed03ceade3ad78af7a9ef9788"
	I1002 21:54:48.294171 1195376 cri.go:89] found id: "3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98"
	I1002 21:54:48.294174 1195376 cri.go:89] found id: "c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270"
	I1002 21:54:48.294177 1195376 cri.go:89] found id: "88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2"
	I1002 21:54:48.294180 1195376 cri.go:89] found id: "5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08"
	I1002 21:54:48.294187 1195376 cri.go:89] found id: "859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	I1002 21:54:48.294205 1195376 cri.go:89] found id: "7e7d0b2884e0b1793d47f40952ea30e017bf6f19ac40af2b25670184e8f23167"
	I1002 21:54:48.294214 1195376 cri.go:89] found id: ""
	I1002 21:54:48.294265 1195376 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:54:48.308716 1195376 out.go:203] 
	W1002 21:54:48.311770 1195376 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:54:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:54:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:54:48.311805 1195376 out.go:285] * 
	* 
	W1002 21:54:48.320020 1195376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:54:48.322805 1195376 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-661954 --alsologtostderr -v=1 failed: exit status 80
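The failure above is mechanical rather than a flake: `pause` checks the kubelet, disables it, enumerates CRI containers via crictl, then shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist inside this cri-o node. The retry.go lines show the backoff (~231ms, then ~468ms) before the error is surfaced as GUEST_PAUSE. Below is a minimal sketch of that retry-with-backoff shape, assuming an illustrative 200ms base delay, a three-attempt cap, and a hypothetical helper name; it is not minikube's actual retry implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunningContainers is a hypothetical stand-in for the pause path's
// container listing: run `sudo runc list -f json`, and on failure retry
// with a jittered, roughly doubling delay, as seen in the log above.
func listRunningContainers(maxAttempts int) ([]byte, error) {
	delay := 200 * time.Millisecond // assumed base delay
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		lastErr = err
		// Jitter mirrors the uneven 231ms / 468ms waits logged by retry.go.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return nil, fmt.Errorf("list running: runc: %w", lastErr)
}

func main() {
	if _, err := listRunningContainers(3); err != nil {
		fmt.Println(err) // this is the error the test sees as GUEST_PAUSE
	}
}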
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-661954
helpers_test.go:243: (dbg) docker inspect no-preload-661954:
-- stdout --
	[
	    {
	        "Id": "f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135",
	        "Created": "2025-10-02T21:52:01.48084196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1192593,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:53:35.269560752Z",
	            "FinishedAt": "2025-10-02T21:53:34.266431559Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/hosts",
	        "LogPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135-json.log",
	        "Name": "/no-preload-661954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-661954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-661954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135",
	                "LowerDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-661954",
	                "Source": "/var/lib/docker/volumes/no-preload-661954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-661954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-661954",
	                "name.minikube.sigs.k8s.io": "no-preload-661954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "201ec57258dfe58af14e6fd9c40093c8a8b69c803ac59f23c32654cb394f3949",
	            "SandboxKey": "/var/run/docker/netns/201ec57258df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-661954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:90:22:5f:aa:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76a20fa3488da5dc1336dc065545b6cee383a338650fbc63c52f9a29c8b4abb9",
	                    "EndpointID": "dda2c79e86654c345a475640388dcb96c091457be23a942ef3efd12ab524e0c4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-661954",
	                        "f3d778675684"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
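As a cross-check, the host SSH port (34201) that the pause command dialed comes straight from this inspect output: cli_runner applies the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to `docker container inspect`, as shown in the stderr log above. Below is a self-contained sketch of that extraction, with the struct trimmed to just the fields the template touches; the type and program structure are illustrative, not minikube's code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"text/template"
)

// inspectEntry models only the slice of `docker inspect` output that the
// port template reads; encoding/json ignores every other field.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// Pipe in the JSON from `docker inspect no-preload-661954`.
	var containers []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		panic(err)
	}
	// The same template string that appears in the pause log above.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, containers[0]); err != nil {
		panic(err)
	}
	fmt.Println() // prints 34201 for the inspect output shown above
}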
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954: exit status 2 (343.38649ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-661954 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-661954 logs -n 25: (1.351568044s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:48 UTC │ 02 Oct 25 21:48 UTC │
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563 │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ stop    │ -p old-k8s-version-714101 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:53 UTC │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:53:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:53:34.901160 1192467 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:53:34.901344 1192467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:34.901354 1192467 out.go:374] Setting ErrFile to fd 2...
	I1002 21:53:34.901359 1192467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:34.901603 1192467 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:53:34.901971 1192467 out.go:368] Setting JSON to false
	I1002 21:53:34.902943 1192467 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23752,"bootTime":1759418263,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:53:34.903008 1192467 start.go:140] virtualization:  
	I1002 21:53:34.906123 1192467 out.go:179] * [no-preload-661954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:53:34.909913 1192467 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:53:34.909979 1192467 notify.go:221] Checking for updates...
	I1002 21:53:34.915971 1192467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:53:34.918955 1192467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:34.921858 1192467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:53:34.925522 1192467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:53:34.928444 1192467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:53:34.931934 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:34.932583 1192467 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:53:34.971588 1192467 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:53:34.971693 1192467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:35.062471 1192467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:53:35.050304835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:35.062582 1192467 docker.go:319] overlay module found
	I1002 21:53:35.065722 1192467 out.go:179] * Using the docker driver based on existing profile
	I1002 21:53:30.791545 1189833 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:53:31.561972 1189833 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:53:31.562140 1189833 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-132977 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:53:32.528147 1189833 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:53:32.528501 1189833 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-132977 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:53:33.148400 1189833 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:53:33.396421 1189833 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:53:33.791661 1189833 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:53:33.792035 1189833 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:53:34.284468 1189833 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:53:35.068509 1192467 start.go:306] selected driver: docker
	I1002 21:53:35.068525 1192467 start.go:936] validating driver "docker" against &{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:35.068625 1192467 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:53:35.069310 1192467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:35.175126 1192467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:53:35.156629246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:35.175468 1192467 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:35.175494 1192467 cni.go:84] Creating CNI manager for ""
	I1002 21:53:35.175552 1192467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:35.175584 1192467 start.go:350] cluster config:
	{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:35.178968 1192467 out.go:179] * Starting "no-preload-661954" primary control-plane node in "no-preload-661954" cluster
	I1002 21:53:35.181760 1192467 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:53:35.184818 1192467 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:53:35.187591 1192467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:35.187755 1192467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:53:35.188104 1192467 cache.go:107] acquiring lock: {Name:mk77546a797d48dfa87e4f15444ebfe2ae46de0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188183 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 21:53:35.188191 1192467 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.019µs
	I1002 21:53:35.188203 1192467 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 21:53:35.188217 1192467 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:53:35.188439 1192467 cache.go:107] acquiring lock: {Name:mkb30203224ed1c1a4b88d93d3aeb9a29d46fb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188507 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 21:53:35.188515 1192467 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 80.859µs
	I1002 21:53:35.188521 1192467 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 21:53:35.188533 1192467 cache.go:107] acquiring lock: {Name:mk2aab2e3052911889ff3d13b07414606ffa2c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188567 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 21:53:35.188572 1192467 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 41.386µs
	I1002 21:53:35.188578 1192467 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 21:53:35.188587 1192467 cache.go:107] acquiring lock: {Name:mkb1bbde6510d7fb66d3923ec81dcf1545e1aeca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188613 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 21:53:35.188618 1192467 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.73µs
	I1002 21:53:35.188624 1192467 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 21:53:35.188633 1192467 cache.go:107] acquiring lock: {Name:mk783e98a1246826a6f16b0bd25f720d93184154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188658 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 21:53:35.188663 1192467 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.76µs
	I1002 21:53:35.188676 1192467 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 21:53:35.188687 1192467 cache.go:107] acquiring lock: {Name:mk232b04a28dc0f5922a8e36bb60d83a371a69dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188713 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 21:53:35.188717 1192467 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.606µs
	I1002 21:53:35.188723 1192467 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 21:53:35.188732 1192467 cache.go:107] acquiring lock: {Name:mk17c8111e11ff4babf675464dda89dffef8dccd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188757 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 21:53:35.188763 1192467 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.401µs
	I1002 21:53:35.188879 1192467 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 21:53:35.188898 1192467 cache.go:107] acquiring lock: {Name:mkb9b4c6e229a9543f9236d679c4b53878bc9ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188953 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 21:53:35.188961 1192467 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 65.787µs
	I1002 21:53:35.188967 1192467 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 21:53:35.188974 1192467 cache.go:87] Successfully saved all images to host disk.
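The cache checks above all follow one pattern: take a per-image named lock, stat the tarball under .minikube/cache/images/<arch>/, and skip the save when it already exists, which is why each check completes in tens of microseconds. A minimal Go sketch of that exists-check pattern; the helper names are hypothetical, not minikube's actual cache.go internals:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    var cacheLocks sync.Map // one lock per destination path, like the named locks in the log

    // cachedPath mirrors the layout seen above: the image ref's tag separator
    // becomes "_" under .minikube/cache/images/<arch>/.
    func cachedPath(cacheDir, arch, image string) string {
        return filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(image, ":", "_"))
    }

    // ensureCached skips the save when the tarball already exists, which is why
    // each check in the log takes only microseconds.
    func ensureCached(cacheDir, arch, image string) error {
        dst := cachedPath(cacheDir, arch, image)
        mu, _ := cacheLocks.LoadOrStore(dst, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dst); err == nil {
            fmt.Printf("cache image %q -> %q took %s (exists)\n", image, dst, time.Since(start))
            return nil
        }
        // Cache miss: this is where the image would be pulled and saved
        // to a tar file (omitted in this sketch).
        return fmt.Errorf("not cached: %s", dst)
    }

    func main() {
        _ = ensureCached(os.ExpandEnv("$HOME/.minikube/cache"), "arm64", "registry.k8s.io/pause:3.10.1")
    }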
	I1002 21:53:35.209172 1192467 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:53:35.209192 1192467 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:53:35.209203 1192467 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:53:35.209225 1192467 start.go:361] acquireMachinesLock for no-preload-661954: {Name:mk6a385b42202eaf12d2e98c4a7f7a9c153c60e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.209273 1192467 start.go:365] duration metric: took 32.262µs to acquireMachinesLock for "no-preload-661954"
	I1002 21:53:35.209292 1192467 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:53:35.209297 1192467 fix.go:55] fixHost starting: 
	I1002 21:53:35.209553 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:35.231660 1192467 fix.go:113] recreateIfNeeded on no-preload-661954: state=Stopped err=<nil>
	W1002 21:53:35.231690 1192467 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:53:35.146380 1189833 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:53:35.272785 1189833 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:53:36.887132 1189833 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:53:38.110579 1189833 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:53:38.111916 1189833 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:53:38.114470 1189833 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:53:35.234941 1192467 out.go:252] * Restarting existing docker container for "no-preload-661954" ...
	I1002 21:53:35.235048 1192467 cli_runner.go:164] Run: docker start no-preload-661954
	I1002 21:53:35.619228 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:35.647925 1192467 kic.go:430] container "no-preload-661954" state is running.
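The restart path above is driven by two docker CLI calls: `docker container inspect --format={{.State.Status}}` to read the state, then `docker start` when it reports Stopped. A minimal Go sketch of that check-then-start step via os/exec; illustrative only, not minikube's cli_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState mirrors the cli_runner call above:
    // docker container inspect <name> --format={{.State.Status}}
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := containerState("no-preload-661954")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        if state != "running" { // the log shows state=Stopped, hence the restart
            _ = exec.Command("docker", "start", "no-preload-661954").Run()
        }
        fmt.Println(state)
    }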
	I1002 21:53:35.648332 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:35.670854 1192467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:53:35.671096 1192467 machine.go:93] provisionDockerMachine start ...
	I1002 21:53:35.671161 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:35.703665 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:35.703994 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:35.704006 1192467 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:53:35.704610 1192467 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:34201: read: connection reset by peer
	I1002 21:53:38.857630 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:53:38.857699 1192467 ubuntu.go:182] provisioning hostname "no-preload-661954"
	I1002 21:53:38.857794 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:38.878845 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:38.879146 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:38.879163 1192467 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-661954 && echo "no-preload-661954" | sudo tee /etc/hostname
	I1002 21:53:39.021606 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:53:39.021702 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.040144 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:39.040465 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:39.040489 1192467 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-661954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-661954/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-661954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:53:39.174332 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
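provisionDockerMachine runs each of these commands over SSH against the forwarded port (127.0.0.1:34201 here), authenticating with the machine's id_rsa. A rough sketch of that remote-exec step using golang.org/x/crypto/ssh; the address, user, and key path come from the log, everything else is simplified and not libmachine's actual code:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote opens one session per command, as libmachine's native client does.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local forwarded port
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("127.0.0.1:34201", "docker",
            os.ExpandEnv("$HOME/.minikube/machines/no-preload-661954/id_rsa"), "hostname")
        fmt.Println(out, err)
    }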
	I1002 21:53:39.174356 1192467 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:53:39.174381 1192467 ubuntu.go:190] setting up certificates
	I1002 21:53:39.174390 1192467 provision.go:84] configureAuth start
	I1002 21:53:39.174462 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:39.199440 1192467 provision.go:143] copyHostCerts
	I1002 21:53:39.199504 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:53:39.199513 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:53:39.199565 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:53:39.199656 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:53:39.199661 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:53:39.199687 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:53:39.199745 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:53:39.199749 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:53:39.199783 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:53:39.199839 1192467 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.no-preload-661954 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-661954]
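The server cert generated here carries the SANs listed above so the endpoint validates under any name minikube may dial it by. A condensed crypto/x509 sketch of issuing such a cert; it creates a throwaway CA in memory instead of loading the on-disk ca.pem/ca-key.pem, so it is illustrative only, with error handling elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; minikube signs with the ca.pem/ca-key.pem seen above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-661954"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-661954"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }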
	I1002 21:53:39.732249 1192467 provision.go:177] copyRemoteCerts
	I1002 21:53:39.732321 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:53:39.732369 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.750662 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:39.860304 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:53:39.884742 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:53:38.118027 1189833 out.go:252]   - Booting up control plane ...
	I1002 21:53:38.118154 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:53:38.118243 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:53:38.118329 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:53:38.137079 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:53:38.137300 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:53:38.146003 1189833 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:53:38.146568 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:53:38.146815 1189833 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:53:38.276397 1189833 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:53:38.276520 1189833 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:53:38.790399 1189833 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 510.980423ms
	I1002 21:53:38.790928 1189833 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:53:38.791254 1189833 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:53:38.791540 1189833 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:53:38.792552 1189833 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
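kubeadm's control-plane-check polls each component's local health endpoint until it answers 200 or the 4m0s budget runs out, which is why the "healthy after Ns" lines appear later in this log. A small Go sketch of such a polling loop; the poll interval and TLS handling are assumptions, not kubeadm's exact values:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls an endpoint the way kubeadm's control-plane-check does,
    // giving up after the 4m0s budget mentioned in the log.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The component endpoints serve self-signed certs.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute))
    }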
	I1002 21:53:39.918483 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:53:39.949909 1192467 provision.go:87] duration metric: took 775.494692ms to configureAuth
	I1002 21:53:39.949940 1192467 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:53:39.950130 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:39.950234 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.982165 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:39.982524 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:39.982550 1192467 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:53:40.431478 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:53:40.431547 1192467 machine.go:96] duration metric: took 4.760440429s to provisionDockerMachine
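The container-runtime step just before this writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS marking the service CIDR as an insecure registry, then restarts crio. The same step sketched in Go, assuming it runs as root on the node (minikube does it through the SSH command shown above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // writeSysconfig drops a sysconfig fragment marking the service CIDR as an
    // insecure registry, then bounces crio, matching the provisioning step above.
    func writeSysconfig(cidr string) error {
        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            return err
        }
        body := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", cidr)
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(body), 0o644); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", "crio").Run()
    }

    func main() {
        fmt.Println(writeSysconfig("10.96.0.0/12"))
    }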
	I1002 21:53:40.431574 1192467 start.go:294] postStartSetup for "no-preload-661954" (driver="docker")
	I1002 21:53:40.431603 1192467 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:53:40.431723 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:53:40.431800 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.460589 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.579352 1192467 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:53:40.582836 1192467 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:53:40.582872 1192467 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:53:40.582883 1192467 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:53:40.582946 1192467 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:53:40.583041 1192467 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:53:40.583155 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:53:40.591092 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:53:40.616605 1192467 start.go:297] duration metric: took 184.998596ms for postStartSetup
	I1002 21:53:40.616696 1192467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:53:40.616844 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.647561 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.750782 1192467 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:53:40.756954 1192467 fix.go:57] duration metric: took 5.547649748s for fixHost
	I1002 21:53:40.756981 1192467 start.go:84] releasing machines lock for "no-preload-661954", held for 5.547699282s
	I1002 21:53:40.757047 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:40.784891 1192467 ssh_runner.go:195] Run: cat /version.json
	I1002 21:53:40.784948 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.785190 1192467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:53:40.785240 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.822484 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.822940 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.937967 1192467 ssh_runner.go:195] Run: systemctl --version
	I1002 21:53:41.061857 1192467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:53:41.145969 1192467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:53:41.151940 1192467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:53:41.152019 1192467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:53:41.165141 1192467 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:53:41.165186 1192467 start.go:496] detecting cgroup driver to use...
	I1002 21:53:41.165217 1192467 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:53:41.165275 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:53:41.188391 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:53:41.213237 1192467 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:53:41.213309 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:53:41.238346 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:53:41.265240 1192467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:53:41.496554 1192467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:53:41.702649 1192467 docker.go:234] disabling docker service ...
	I1002 21:53:41.702738 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:53:41.723182 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:53:41.753668 1192467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:53:41.948226 1192467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:53:42.192815 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:53:42.223559 1192467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:53:42.251561 1192467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:53:42.251654 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.267876 1192467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:53:42.267981 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.285908 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.301305 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.318315 1192467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:53:42.332682 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.345212 1192467 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.361714 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.378749 1192467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:53:42.392058 1192467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:53:42.404240 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:42.610903 1192467 ssh_runner.go:195] Run: sudo systemctl restart crio
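Each of the crio.conf.d edits above is a sed substitution over /etc/crio/crio.conf.d/02-crio.conf: replace the whole `key = ...` line with the desired value, then daemon-reload and restart crio once at the end. The same substitution expressed in Go over an in-memory config string; a sketch, not minikube's crio.go:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey mirrors the `sed -i 's|^.*key = .*$|key = "value"|'` edits above.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile("(?m)^.*" + regexp.QuoteMeta(key) + " = .*$")
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }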
	I1002 21:53:42.815298 1192467 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:53:42.815393 1192467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:53:42.821814 1192467 start.go:564] Will wait 60s for crictl version
	I1002 21:53:42.821896 1192467 ssh_runner.go:195] Run: which crictl
	I1002 21:53:42.825340 1192467 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:53:42.877728 1192467 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:53:42.877820 1192467 ssh_runner.go:195] Run: crio --version
	I1002 21:53:42.940328 1192467 ssh_runner.go:195] Run: crio --version
	I1002 21:53:42.988804 1192467 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:53:42.991635 1192467 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:53:43.013876 1192467 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:53:43.017684 1192467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:53:43.040354 1192467 kubeadm.go:883] updating cluster {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:53:43.040474 1192467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:43.040519 1192467 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:53:43.097583 1192467 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:53:43.097609 1192467 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:53:43.097617 1192467 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:53:43.097711 1192467 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-661954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:53:43.097796 1192467 ssh_runner.go:195] Run: crio config
	I1002 21:53:43.192119 1192467 cni.go:84] Creating CNI manager for ""
	I1002 21:53:43.192150 1192467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:43.192168 1192467 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:53:43.192204 1192467 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-661954 NodeName:no-preload-661954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:53:43.192338 1192467 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-661954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
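	Worth noting in the generated config: conntrack values of 0 and 0s tell kube-proxy to leave the host's nf_conntrack sysctls untouched, matching the "Skip setting" comments. A quick way to sanity-check such a fragment is to unmarshal it; the struct below is a trimmed stand-in for the real KubeProxyConfiguration type, not the upstream API, and uses gopkg.in/yaml.v3:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // Just the kube-proxy fragment from the generated config above; zero values
    // like maxPerCore: 0 mean "do not set the corresponding host sysctl".
    type conntrack struct {
        MaxPerCore            int    `yaml:"maxPerCore"`
        TCPEstablishedTimeout string `yaml:"tcpEstablishedTimeout"`
        TCPCloseWaitTimeout   string `yaml:"tcpCloseWaitTimeout"`
    }

    type kubeProxyCfg struct {
        APIVersion  string    `yaml:"apiVersion"`
        Kind        string    `yaml:"kind"`
        ClusterCIDR string    `yaml:"clusterCIDR"`
        Conntrack   conntrack `yaml:"conntrack"`
    }

    func main() {
        doc := `apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    clusterCIDR: "10.244.0.0/16"
    conntrack:
      maxPerCore: 0
      tcpEstablishedTimeout: 0s
      tcpCloseWaitTimeout: 0s
    `
        var cfg kubeProxyCfg
        if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }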
	
	I1002 21:53:43.192434 1192467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:53:43.205178 1192467 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:53:43.205246 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:53:43.215550 1192467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:53:43.239441 1192467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:53:43.262457 1192467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 21:53:43.284544 1192467 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:53:43.293407 1192467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:53:43.307245 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:43.506121 1192467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:43.523524 1192467 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954 for IP: 192.168.85.2
	I1002 21:53:43.523545 1192467 certs.go:195] generating shared ca certs ...
	I1002 21:53:43.523561 1192467 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:43.523728 1192467 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:53:43.523791 1192467 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:53:43.523803 1192467 certs.go:257] generating profile certs ...
	I1002 21:53:43.523918 1192467 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key
	I1002 21:53:43.523983 1192467 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4
	I1002 21:53:43.524026 1192467 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key
	I1002 21:53:43.524152 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:53:43.524198 1192467 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:53:43.524211 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:53:43.524234 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:53:43.524263 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:53:43.524302 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:53:43.524359 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:53:43.525092 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:53:43.586699 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:53:43.621808 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:53:43.686543 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:53:43.735499 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:53:43.762086 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:53:43.794944 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:53:43.868155 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:53:43.923880 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:53:43.951493 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:53:43.988190 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:53:44.019229 1192467 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:53:44.046658 1192467 ssh_runner.go:195] Run: openssl version
	I1002 21:53:44.053428 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:53:44.063274 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.070507 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.070596 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.113319 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:53:44.122171 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:53:44.131226 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.136447 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.136521 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.182170 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:53:44.195482 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:53:44.207002 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.211690 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.211780 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.256193 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
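The hash-and-symlink sequence above implements OpenSSL's c_rehash convention: clients resolve a CA in /etc/ssl/certs by <subject_hash>.0, so each PEM gets a hash-named link (b5213941.0, 51391683.0, 3ec20f2e.0 here). A Go sketch of one install step, shelling out to the openssl binary for the hash; it assumes openssl is on PATH and the process can write /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA mimics the hash+symlink steps above: OpenSSL looks up CAs in
    // /etc/ssl/certs by <subject_hash>.0, so the PEM needs a hash-named link.
    func installCA(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ln -fs semantics
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }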
	I1002 21:53:44.264627 1192467 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:53:44.268830 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:53:44.317092 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:53:44.387292 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:53:44.526916 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:53:44.730899 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:53:44.894226 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
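These `openssl x509 -checkend 86400` runs confirm each control-plane cert is still valid for at least 24h before the existing cluster is reused; a failure here would trigger regeneration. The equivalent check in Go with crypto/x509; the path in main is one of the certs checked above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the PEM cert at path expires within d,
    // matching `openssl x509 -checkend` semantics.
    func checkend(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        fmt.Println(expiring, err)
    }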
	I1002 21:53:45.002059 1192467 kubeadm.go:400] StartCluster: {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:45.002171 1192467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:53:45.002259 1192467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:53:45.087963 1192467 cri.go:89] found id: "3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98"
	I1002 21:53:45.088014 1192467 cri.go:89] found id: "c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270"
	I1002 21:53:45.088021 1192467 cri.go:89] found id: "88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2"
	I1002 21:53:45.088025 1192467 cri.go:89] found id: "5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08"
	I1002 21:53:45.088037 1192467 cri.go:89] found id: ""
	I1002 21:53:45.088116 1192467 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:53:45.106135 1192467 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:45Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:53:45.106285 1192467 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:53:45.127163 1192467 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:53:45.127205 1192467 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:53:45.127315 1192467 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:53:45.145179 1192467 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:53:45.145726 1192467 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-661954" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:45.145883 1192467 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-661954" cluster setting kubeconfig missing "no-preload-661954" context setting]
	I1002 21:53:45.146372 1192467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.148353 1192467 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:53:45.171240 1192467 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 21:53:45.171280 1192467 kubeadm.go:601] duration metric: took 44.0584ms to restartPrimaryControlPlane
	I1002 21:53:45.171301 1192467 kubeadm.go:402] duration metric: took 169.276623ms to StartCluster
	I1002 21:53:45.171317 1192467 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.171405 1192467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:45.172141 1192467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.172397 1192467 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:45.172731 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:45.172795 1192467 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:53:45.172941 1192467 addons.go:69] Setting storage-provisioner=true in profile "no-preload-661954"
	I1002 21:53:45.172962 1192467 addons.go:238] Setting addon storage-provisioner=true in "no-preload-661954"
	W1002 21:53:45.172971 1192467 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:53:45.172986 1192467 addons.go:69] Setting dashboard=true in profile "no-preload-661954"
	I1002 21:53:45.173070 1192467 addons.go:238] Setting addon dashboard=true in "no-preload-661954"
	W1002 21:53:45.173108 1192467 addons.go:247] addon dashboard should already be in state true
	I1002 21:53:45.173158 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.172993 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.173802 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.173831 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.173011 1192467 addons.go:69] Setting default-storageclass=true in profile "no-preload-661954"
	I1002 21:53:45.174417 1192467 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-661954"
	I1002 21:53:45.174758 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.178130 1192467 out.go:179] * Verifying Kubernetes components...
	I1002 21:53:45.184246 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:45.223050 1192467 addons.go:238] Setting addon default-storageclass=true in "no-preload-661954"
	W1002 21:53:45.223076 1192467 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:53:45.223104 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.223578 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.253512 1192467 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:53:45.256789 1192467 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:45.256819 1192467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:53:45.256917 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.265209 1192467 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:53:45.272097 1192467 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:53:47.251888 1189833 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.458509189s
	I1002 21:53:48.346662 1189833 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 9.553935935s
	I1002 21:53:49.793477 1189833 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.001693502s
	I1002 21:53:49.824620 1189833 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:53:49.862218 1189833 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:53:49.880328 1189833 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:53:49.880555 1189833 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-132977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:53:49.899333 1189833 kubeadm.go:318] [bootstrap-token] Using token: 21plum.6l6cs3s9kwcorv4m
	I1002 21:53:45.275305 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:53:45.275342 1192467 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:53:45.275420 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.296741 1192467 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:45.296773 1192467 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:53:45.296850 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.321320 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.368395 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.374254 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.730839 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:53:45.730866 1192467 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:53:45.784541 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:53:45.784569 1192467 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:53:45.855767 1192467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:45.865965 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:45.869532 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:53:45.869557 1192467 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:53:45.885049 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:45.961157 1192467 node_ready.go:35] waiting up to 6m0s for node "no-preload-661954" to be "Ready" ...
	I1002 21:53:45.963075 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:53:45.963130 1192467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:53:46.130992 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:53:46.131067 1192467 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:53:46.259505 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:53:46.259570 1192467 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:53:46.362568 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:53:46.362647 1192467 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:53:46.404379 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:53:46.404444 1192467 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:53:46.439621 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:53:46.439701 1192467 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:53:46.463673 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
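The dashboard addon is applied as one kubectl invocation over all ten manifests, using the in-VM kubeconfig and the versioned kubectl under /var/lib/minikube/binaries. A simplified local stand-in for that call; the paths and binary name are taken from the log, and the SSH hop is omitted:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon mirrors the kubectl invocation above, run locally instead of
    // over SSH: kubectl apply -f <manifest> ... with the node's kubeconfig.
    func applyAddon(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        fmt.Println(applyAddon("kubectl", "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml"))
    }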
	I1002 21:53:49.902382 1189833 out.go:252]   - Configuring RBAC rules ...
	I1002 21:53:49.902508 1189833 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:53:49.914154 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:53:49.922679 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:53:49.927109 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:53:49.931429 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:53:49.936135 1189833 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:53:50.201066 1189833 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:53:50.702788 1189833 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:53:51.203959 1189833 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:53:51.205373 1189833 kubeadm.go:318] 
	I1002 21:53:51.205451 1189833 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:53:51.205465 1189833 kubeadm.go:318] 
	I1002 21:53:51.205549 1189833 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:53:51.205558 1189833 kubeadm.go:318] 
	I1002 21:53:51.205585 1189833 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:53:51.205645 1189833 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:53:51.205701 1189833 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:53:51.205710 1189833 kubeadm.go:318] 
	I1002 21:53:51.205763 1189833 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:53:51.205772 1189833 kubeadm.go:318] 
	I1002 21:53:51.205819 1189833 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:53:51.205826 1189833 kubeadm.go:318] 
	I1002 21:53:51.205878 1189833 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:53:51.205956 1189833 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:53:51.206027 1189833 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:53:51.206052 1189833 kubeadm.go:318] 
	I1002 21:53:51.206137 1189833 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:53:51.206218 1189833 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:53:51.206226 1189833 kubeadm.go:318] 
	I1002 21:53:51.206309 1189833 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 21plum.6l6cs3s9kwcorv4m \
	I1002 21:53:51.206415 1189833 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:53:51.206439 1189833 kubeadm.go:318] 	--control-plane 
	I1002 21:53:51.206448 1189833 kubeadm.go:318] 
	I1002 21:53:51.206532 1189833 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:53:51.206541 1189833 kubeadm.go:318] 
	I1002 21:53:51.206629 1189833 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 21plum.6l6cs3s9kwcorv4m \
	I1002 21:53:51.206735 1189833 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:53:51.215083 1189833 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:53:51.215325 1189833 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:53:51.215466 1189833 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
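Note: the --discovery-token-ca-cert-hash printed by kubeadm is the SHA-256 digest of the cluster CA's Subject Public Key Info. If the value is lost it can be recomputed on the control-plane node; this is the standard openssl pipeline from the Kubernetes docs, assuming the default kubeadm layout and an RSA CA key:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'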
	I1002 21:53:51.215488 1189833 cni.go:84] Creating CNI manager for ""
	I1002 21:53:51.215496 1189833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:51.218899 1189833 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:53:52.178341 1192467 node_ready.go:49] node "no-preload-661954" is "Ready"
	I1002 21:53:52.178366 1192467 node_ready.go:38] duration metric: took 6.217135309s for node "no-preload-661954" to be "Ready" ...
	I1002 21:53:52.178381 1192467 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:53:52.178441 1192467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:53:52.585784 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.719776673s)
	I1002 21:53:54.584206 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.699121666s)
	I1002 21:53:54.614118 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.150352846s)
	I1002 21:53:54.614340 1192467 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.435886189s)
	I1002 21:53:54.614380 1192467 api_server.go:72] duration metric: took 9.441950505s to wait for apiserver process to appear ...
	I1002 21:53:54.614401 1192467 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:53:54.614430 1192467 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:54.617022 1192467 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-661954 addons enable metrics-server
	
	I1002 21:53:54.620054 1192467 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 21:53:54.622900 1192467 addons.go:514] duration metric: took 9.450101363s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 21:53:54.630627 1192467 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:53:54.630697 1192467 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
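Note: this 500 is expected transiently. Every poststarthook reports ok except rbac/bootstrap-roles, which only passes once the apiserver has written its bootstrap RBAC objects; the very next probe at 21:53:55 below returns 200. The same per-check view can be pulled through kubectl, and an individual check can be queried on its own:

	kubectl get --raw '/healthz?verbose'
	kubectl get --raw '/healthz/poststarthook/rbac/bootstrap-roles'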
	I1002 21:53:51.221764 1189833 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:53:51.226028 1189833 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:53:51.226066 1189833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:53:51.254968 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
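Note: after the CNI manifest is applied, kindnet runs as a DaemonSet in kube-system (the kindnet-p845j / kindnet-flmgm pods in the later listings). A quick check that the CNI pods rolled out, assuming the stock kindnet DaemonSet name:

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s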
	I1002 21:53:51.939290 1189833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:53:51.939413 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:51.939489 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-132977 minikube.k8s.io/updated_at=2025_10_02T21_53_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=embed-certs-132977 minikube.k8s.io/primary=true
	I1002 21:53:52.314456 1189833 ops.go:34] apiserver oom_adj: -16
	I1002 21:53:52.314561 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:52.815637 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:53.315233 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:53.814746 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:54.314889 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:54.814670 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:55.315642 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:55.815157 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:56.315270 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:56.433779 1189833 kubeadm.go:1113] duration metric: took 4.494413139s to wait for elevateKubeSystemPrivileges
	I1002 21:53:56.433817 1189833 kubeadm.go:402] duration metric: took 27.86764968s to StartCluster
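Note: the repeated `kubectl get sa default` calls at 21:53:52-21:53:56 appear to be minikube polling for the default ServiceAccount, which the controller manager creates shortly after startup, before it finishes elevateKubeSystemPrivileges (the minikube-rbac clusterrolebinding created above). The same wait as a plain shell loop:

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done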
	I1002 21:53:56.433835 1189833 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:56.433900 1189833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:56.435285 1189833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:56.435511 1189833 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:56.435638 1189833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:53:56.435896 1189833 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:56.435933 1189833 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:53:56.435991 1189833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132977"
	I1002 21:53:56.436007 1189833 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-132977"
	I1002 21:53:56.436027 1189833 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:53:56.436540 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.436923 1189833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132977"
	I1002 21:53:56.436947 1189833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132977"
	I1002 21:53:56.437223 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.439647 1189833 out.go:179] * Verifying Kubernetes components...
	I1002 21:53:56.443344 1189833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:56.476650 1189833 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:53:56.478543 1189833 addons.go:238] Setting addon default-storageclass=true in "embed-certs-132977"
	I1002 21:53:56.478584 1189833 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:53:56.479128 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.479690 1189833 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:56.479712 1189833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:53:56.479769 1189833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:53:56.524116 1189833 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:56.524137 1189833 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:53:56.524204 1189833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:53:56.524618 1189833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:53:56.565467 1189833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:53:56.834821 1189833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:53:56.927271 1189833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:56.955009 1189833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:56.973420 1189833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:57.566058 1189833 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1002 21:53:57.900359 1189833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:53:57.915026 1189833 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:53:55.114484 1192467 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:55.123286 1192467 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 21:53:55.124661 1192467 api_server.go:141] control plane version: v1.34.1
	I1002 21:53:55.124693 1192467 api_server.go:131] duration metric: took 510.273967ms to wait for apiserver health ...
	I1002 21:53:55.124703 1192467 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:53:55.128660 1192467 system_pods.go:59] 8 kube-system pods found
	I1002 21:53:55.128703 1192467 system_pods.go:61] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:55.128760 1192467 system_pods.go:61] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:53:55.128777 1192467 system_pods.go:61] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:55.128794 1192467 system_pods.go:61] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:53:55.128828 1192467 system_pods.go:61] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:53:55.128841 1192467 system_pods.go:61] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:55.128879 1192467 system_pods.go:61] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:53:55.128911 1192467 system_pods.go:61] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:55.128919 1192467 system_pods.go:74] duration metric: took 4.210506ms to wait for pod list to return data ...
	I1002 21:53:55.128954 1192467 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:53:55.132297 1192467 default_sa.go:45] found service account: "default"
	I1002 21:53:55.132328 1192467 default_sa.go:55] duration metric: took 3.360478ms for default service account to be created ...
	I1002 21:53:55.132341 1192467 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:53:55.136969 1192467 system_pods.go:86] 8 kube-system pods found
	I1002 21:53:55.137010 1192467 system_pods.go:89] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:55.137026 1192467 system_pods.go:89] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:53:55.137034 1192467 system_pods.go:89] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:55.137042 1192467 system_pods.go:89] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:53:55.137053 1192467 system_pods.go:89] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:53:55.137062 1192467 system_pods.go:89] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:55.137069 1192467 system_pods.go:89] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:53:55.137078 1192467 system_pods.go:89] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:55.137087 1192467 system_pods.go:126] duration metric: took 4.740634ms to wait for k8s-apps to be running ...
	I1002 21:53:55.137100 1192467 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:53:55.137170 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:55.158776 1192467 system_svc.go:56] duration metric: took 21.666236ms WaitForService to wait for kubelet
	I1002 21:53:55.158878 1192467 kubeadm.go:586] duration metric: took 9.986436313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:55.158941 1192467 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:53:55.162488 1192467 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:53:55.162583 1192467 node_conditions.go:123] node cpu capacity is 2
	I1002 21:53:55.162612 1192467 node_conditions.go:105] duration metric: took 3.648453ms to run NodePressure ...
	I1002 21:53:55.162651 1192467 start.go:242] waiting for startup goroutines ...
	I1002 21:53:55.162679 1192467 start.go:247] waiting for cluster config update ...
	I1002 21:53:55.162704 1192467 start.go:256] writing updated cluster config ...
	I1002 21:53:55.163077 1192467 ssh_runner.go:195] Run: rm -f paused
	I1002 21:53:55.167590 1192467 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:53:55.171504 1192467 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:53:57.180373 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:53:59.678019 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	I1002 21:53:57.918188 1189833 addons.go:514] duration metric: took 1.482248421s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:53:58.071227 1189833 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-132977" context rescaled to 1 replicas
	W1002 21:53:59.905322 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:01.681466 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:04.177105 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:02.403445 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:04.405558 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:06.179313 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:08.678604 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:06.903317 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:09.403229 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:11.176586 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:13.677184 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:11.403384 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:13.903569 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:15.678985 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:18.177426 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:15.904067 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:18.403552 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:20.678796 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:23.177446 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:20.903769 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:22.903999 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:24.904140 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:25.178291 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:27.677592 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:26.904328 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:29.403133 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:30.177272 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:32.677201 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	I1002 21:54:33.176891 1192467 pod_ready.go:94] pod "coredns-66bc5c9577-ddsr2" is "Ready"
	I1002 21:54:33.176922 1192467 pod_ready.go:86] duration metric: took 38.005343021s for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.179511 1192467 pod_ready.go:83] waiting for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.184943 1192467 pod_ready.go:94] pod "etcd-no-preload-661954" is "Ready"
	I1002 21:54:33.184972 1192467 pod_ready.go:86] duration metric: took 5.432776ms for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.187710 1192467 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.196382 1192467 pod_ready.go:94] pod "kube-apiserver-no-preload-661954" is "Ready"
	I1002 21:54:33.196413 1192467 pod_ready.go:86] duration metric: took 8.671641ms for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.198899 1192467 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.375336 1192467 pod_ready.go:94] pod "kube-controller-manager-no-preload-661954" is "Ready"
	I1002 21:54:33.375367 1192467 pod_ready.go:86] duration metric: took 176.436003ms for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.575930 1192467 pod_ready.go:83] waiting for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.975363 1192467 pod_ready.go:94] pod "kube-proxy-5jstv" is "Ready"
	I1002 21:54:33.975393 1192467 pod_ready.go:86] duration metric: took 399.437804ms for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.180551 1192467 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.575430 1192467 pod_ready.go:94] pod "kube-scheduler-no-preload-661954" is "Ready"
	I1002 21:54:34.575460 1192467 pod_ready.go:86] duration metric: took 394.885383ms for pod "kube-scheduler-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.575481 1192467 pod_ready.go:40] duration metric: took 39.407775252s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:34.631486 1192467 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:54:34.634421 1192467 out.go:179] * Done! kubectl is now configured to use "no-preload-661954" cluster and "default" namespace by default
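Note: the "minor skew: 1" message reflects kubectl's support policy: a client is supported against API servers within one minor version in either direction, so kubectl 1.33.2 against a 1.34.1 cluster is fine and merely logged. The skew can be checked directly (the jq filter is just for readability, assuming jq is installed):

	kubectl version --output=json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'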
	W1002 21:54:31.403472 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:33.903536 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:35.903696 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:37.903870 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	I1002 21:54:38.404715 1189833 node_ready.go:49] node "embed-certs-132977" is "Ready"
	I1002 21:54:38.404740 1189833 node_ready.go:38] duration metric: took 40.504339879s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:54:38.404753 1189833 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:54:38.404814 1189833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:54:38.433291 1189833 api_server.go:72] duration metric: took 41.997752118s to wait for apiserver process to appear ...
	I1002 21:54:38.433313 1189833 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:54:38.433332 1189833 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:54:38.445543 1189833 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:54:38.446830 1189833 api_server.go:141] control plane version: v1.34.1
	I1002 21:54:38.446852 1189833 api_server.go:131] duration metric: took 13.531475ms to wait for apiserver health ...
	I1002 21:54:38.446860 1189833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:54:38.450727 1189833 system_pods.go:59] 8 kube-system pods found
	I1002 21:54:38.450758 1189833 system_pods.go:61] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.450765 1189833 system_pods.go:61] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.450771 1189833 system_pods.go:61] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.450775 1189833 system_pods.go:61] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.450784 1189833 system_pods.go:61] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.450789 1189833 system_pods.go:61] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.450793 1189833 system_pods.go:61] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.450799 1189833 system_pods.go:61] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.450805 1189833 system_pods.go:74] duration metric: took 3.939416ms to wait for pod list to return data ...
	I1002 21:54:38.450813 1189833 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:54:38.454525 1189833 default_sa.go:45] found service account: "default"
	I1002 21:54:38.454544 1189833 default_sa.go:55] duration metric: took 3.725851ms for default service account to be created ...
	I1002 21:54:38.454554 1189833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:54:38.457911 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:38.457941 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.457949 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.457955 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.457959 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.457964 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.457968 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.457971 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.457977 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.457997 1189833 retry.go:31] will retry after 282.68274ms: missing components: kube-dns
	I1002 21:54:38.745579 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:38.745667 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.745700 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.745714 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.745720 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.745725 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.745730 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.745734 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.745740 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.745759 1189833 retry.go:31] will retry after 289.646816ms: missing components: kube-dns
	I1002 21:54:39.039529 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:39.039556 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:39.039562 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:39.039569 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:39.039573 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:39.039578 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:39.039581 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:39.039585 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:39.039591 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:39.039605 1189833 retry.go:31] will retry after 417.217485ms: missing components: kube-dns
	I1002 21:54:39.461452 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:39.461501 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running
	I1002 21:54:39.461509 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:39.461513 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:39.461518 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:39.461541 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:39.461554 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:39.461573 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:39.461584 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:54:39.461594 1189833 system_pods.go:126] duration metric: took 1.007033707s to wait for k8s-apps to be running ...
	I1002 21:54:39.461604 1189833 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:54:39.461671 1189833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:54:39.478598 1189833 system_svc.go:56] duration metric: took 16.985989ms WaitForService to wait for kubelet
	I1002 21:54:39.478670 1189833 kubeadm.go:586] duration metric: took 43.043135125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:54:39.478704 1189833 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:54:39.482160 1189833 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:54:39.482195 1189833 node_conditions.go:123] node cpu capacity is 2
	I1002 21:54:39.482210 1189833 node_conditions.go:105] duration metric: took 3.499272ms to run NodePressure ...
	I1002 21:54:39.482223 1189833 start.go:242] waiting for startup goroutines ...
	I1002 21:54:39.482230 1189833 start.go:247] waiting for cluster config update ...
	I1002 21:54:39.482242 1189833 start.go:256] writing updated cluster config ...
	I1002 21:54:39.482538 1189833 ssh_runner.go:195] Run: rm -f paused
	I1002 21:54:39.486611 1189833 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:39.561128 1189833 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.566358 1189833 pod_ready.go:94] pod "coredns-66bc5c9577-rl5vq" is "Ready"
	I1002 21:54:39.566389 1189833 pod_ready.go:86] duration metric: took 5.230919ms for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.568792 1189833 pod_ready.go:83] waiting for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.573727 1189833 pod_ready.go:94] pod "etcd-embed-certs-132977" is "Ready"
	I1002 21:54:39.573755 1189833 pod_ready.go:86] duration metric: took 4.934738ms for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.576177 1189833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.580896 1189833 pod_ready.go:94] pod "kube-apiserver-embed-certs-132977" is "Ready"
	I1002 21:54:39.580922 1189833 pod_ready.go:86] duration metric: took 4.714781ms for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.583217 1189833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.892636 1189833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-132977" is "Ready"
	I1002 21:54:39.892665 1189833 pod_ready.go:86] duration metric: took 309.422099ms for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.096570 1189833 pod_ready.go:83] waiting for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.492183 1189833 pod_ready.go:94] pod "kube-proxy-rslfw" is "Ready"
	I1002 21:54:40.492212 1189833 pod_ready.go:86] duration metric: took 395.615555ms for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.692648 1189833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:41.092952 1189833 pod_ready.go:94] pod "kube-scheduler-embed-certs-132977" is "Ready"
	I1002 21:54:41.092979 1189833 pod_ready.go:86] duration metric: took 400.302152ms for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:41.092991 1189833 pod_ready.go:40] duration metric: took 1.606349041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:41.150287 1189833 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:54:41.156763 1189833 out.go:179] * Done! kubectl is now configured to use "embed-certs-132977" cluster and "default" namespace by default
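Note: the pod_ready polling above (each kube-system pod waited on until "Ready" or gone) can be reproduced against either cluster with kubectl's built-in wait, e.g. for the profile just finished (minikube names the kubectl context after the profile):

	kubectl --context embed-certs-132977 -n kube-system wait pod \
	  --all --for=condition=Ready --timeout=240s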
	
	
	==> CRI-O <==
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.87296793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.883796371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.884331987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.899401063Z" level=info msg="Created container 859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc/dashboard-metrics-scraper" id=76f3d10d-3f42-4d32-b1c6-5dadb86f826e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.900100024Z" level=info msg="Starting container: 859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1" id=46e36e6b-dc4e-4558-a217-3f376742e0cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:54:28 no-preload-661954 conmon[1626]: conmon 859aca45b87df60af8c0 <ninfo>: container 1628 exited with status 1
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.906412785Z" level=info msg="Started container" PID=1628 containerID=859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc/dashboard-metrics-scraper id=46e36e6b-dc4e-4558-a217-3f376742e0cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=b489017ce62851f397774df3e66e0024acb17f6620638b576384427bbfc11ede
	Oct 02 21:54:29 no-preload-661954 crio[647]: time="2025-10-02T21:54:29.199455539Z" level=info msg="Removing container: 2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85" id=1398e49a-0b9b-47f7-8df1-b58cbf53bfe7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:54:29 no-preload-661954 crio[647]: time="2025-10-02T21:54:29.206322369Z" level=info msg="Error loading conmon cgroup of container 2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85: cgroup deleted" id=1398e49a-0b9b-47f7-8df1-b58cbf53bfe7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:54:29 no-preload-661954 crio[647]: time="2025-10-02T21:54:29.209119949Z" level=info msg="Removed container 2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc/dashboard-metrics-scraper" id=1398e49a-0b9b-47f7-8df1-b58cbf53bfe7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.713412716Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.721033569Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.721067554Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.721091086Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.724147571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.724180998Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.724206606Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.727310614Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.72734519Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.727367688Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.73013248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.730163511Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.730188044Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.733076272Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.73310878Z" level=info msg="Updated default CNI network name to kindnet"
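Note: the CRI-O log above and the container listing below come from the node itself; both can be reproduced over the minikube ssh tunnel (profile name from this run, assuming the kicbase node runs CRI-O as a systemd unit, as it does with this driver/runtime combination):

	minikube -p no-preload-661954 ssh -- sudo journalctl -u crio --since '5 minutes ago'
	minikube -p no-preload-661954 ssh -- sudo crictl ps -a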
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	859aca45b87df       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   b489017ce6285       dashboard-metrics-scraper-6ffb444bf9-fb9gc   kubernetes-dashboard
	032a3b41ed0c4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   0d90154fe652d       storage-provisioner                          kube-system
	7e7d0b2884e0b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   d5b420ffa7191       kubernetes-dashboard-855c9754f9-mmbrz        kubernetes-dashboard
	f633fcfe67ab1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   5201dea46c850       busybox                                      default
	4e0dc14637932       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   0d90154fe652d       storage-provisioner                          kube-system
	6b82861a8945a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   11fd8d911b948       coredns-66bc5c9577-ddsr2                     kube-system
	9e3cec57132b7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   e76d327c93aef       kindnet-flmgm                                kube-system
	a6ab31d1759e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   e8aa4a619b440       kube-proxy-5jstv                             kube-system
	3cf04b502d36e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5796b45e7007a       kube-apiserver-no-preload-661954             kube-system
	c31f86dc038a7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   58ee2f7b0e671       etcd-no-preload-661954                       kube-system
	88076a11fa43f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2da8a0fde382d       kube-controller-manager-no-preload-661954    kube-system
	5cd95915db618       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   90b8b0bff4f83       kube-scheduler-no-preload-661954             kube-system
	
	
	==> coredns [6b82861a8945a6d58ec459cbce94b85d54a3a5234cc6ba7d3d096a78eb01fdee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55879 - 14096 "HINFO IN 1947180028123946014.2018825497004255907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003846824s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
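Note: the i/o timeouts against 10.96.0.1:443 (the kubernetes Service ClusterIP) typically mean CoreDNS came up before kube-proxy/CNI had made the service VIP reachable after the restart; once the list calls succeed, the ready plugin flips. That plugin serves /ready on port 8181 inside the pod, which is what the deployment's readiness probe hits, and a manual check can reach it the same way:

	kubectl -n kube-system port-forward deploy/coredns 8181:8181 &
	curl -s http://127.0.0.1:8181/ready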
	
	
	==> describe nodes <==
	Name:               no-preload-661954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-661954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=no-preload-661954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_52_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-661954
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:54:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-661954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c032b084c984865bf1543fa4546a69b
	  System UUID:                a884495e-b86e-4c01-a759-33d7d494f01d
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-ddsr2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-661954                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-flmgm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-no-preload-661954              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-661954     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-5jstv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-no-preload-661954              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fb9gc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mmbrz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 115s               kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m2s               kubelet          Node no-preload-661954 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m2s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s               kubelet          Node no-preload-661954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s               kubelet          Node no-preload-661954 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m2s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           118s               node-controller  Node no-preload-661954 event: Registered Node no-preload-661954 in Controller
	  Normal   NodeReady                103s               kubelet          Node no-preload-661954 status is now: NodeReady
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x9 over 66s)  kubelet          Node no-preload-661954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node no-preload-661954 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x7 over 66s)  kubelet          Node no-preload-661954 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node no-preload-661954 event: Registered Node no-preload-661954 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270] <==
	{"level":"warn","ts":"2025-10-02T21:53:49.146566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.170244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.211489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.257881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.285531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.312754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.344099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.373238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.434458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.542820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.544659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.564582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.604901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.638463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.684056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.702243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.728483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.762640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.805396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.882418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.928339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.959326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.990268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:50.032336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:50.196252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:54:49 up  6:37,  0 user,  load average: 4.23, 3.11, 2.09
	Linux no-preload-661954 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9e3cec57132b725065e74b658befc5de805ca717fa3dc565174c378bb7fcc9c5] <==
	I1002 21:53:53.510629       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:53:53.511077       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:53:53.511252       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:53:53.511289       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:53:53.511300       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:53:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:53:53.728423       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:53:53.728448       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:53:53.728459       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:53:53.728573       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:54:23.714148       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:54:23.714411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:54:23.714634       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:54:23.728004       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:54:25.229657       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:54:25.229752       1 metrics.go:72] Registering metrics
	I1002 21:54:25.229847       1 controller.go:711] "Syncing nftables rules"
	I1002 21:54:33.713068       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:54:33.713142       1 main.go:301] handling current node
	I1002 21:54:43.717388       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:54:43.717437       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98] <==
	I1002 21:53:52.276272       1 policy_source.go:240] refreshing policies
	I1002 21:53:52.290460       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:53:52.290483       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:53:52.303578       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:53:52.305058       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:53:52.305121       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:53:52.307244       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:53:52.319991       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:53:52.332030       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:53:52.332089       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:53:52.358106       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:53:52.358141       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:53:52.368139       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 21:53:52.414271       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:53:52.785857       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:53:52.898435       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:53:54.008251       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:53:54.158228       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:53:54.253283       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:53:54.282538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:53:54.581841       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.26.45"}
	I1002 21:53:54.606895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.108.131"}
	I1002 21:53:56.220589       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:53:56.661970       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:53:56.807242       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2] <==
	I1002 21:53:56.250872       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 21:53:56.251451       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:53:56.251756       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:53:56.252000       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:53:56.252033       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:53:56.255833       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:53:56.266671       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:53:56.268025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:53:56.273292       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:53:56.275732       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:53:56.282261       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:53:56.282443       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:53:56.282493       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:53:56.282522       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:53:56.282549       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:53:56.291574       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:53:56.296944       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:53:56.300468       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:53:56.300889       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:53:56.301004       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-661954"
	I1002 21:53:56.301080       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:53:56.300565       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:53:56.312438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:53:56.312505       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:53:56.312536       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a6ab31d1759e69dc797d55b97650619bcf6b2ffed03ceade3ad78af7a9ef9788] <==
	I1002 21:53:54.574541       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:53:54.679484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:53:54.780072       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:53:54.780107       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:53:54.780192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:53:54.813063       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:53:54.813133       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:53:54.824523       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:53:54.826013       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:53:54.826061       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:53:54.831665       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:53:54.831689       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:53:54.832016       1 config.go:200] "Starting service config controller"
	I1002 21:53:54.832032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:53:54.832337       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:53:54.832351       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:53:54.835447       1 config.go:309] "Starting node config controller"
	I1002 21:53:54.835972       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:53:54.836336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:53:54.933822       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 21:53:54.937257       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:53:54.937331       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08] <==
	I1002 21:53:49.306993       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:53:54.286646       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:53:54.286748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:53:54.312850       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:53:54.316723       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:53:54.316669       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:53:54.317557       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:53:54.316697       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:53:54.317934       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:53:54.318936       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:53:54.329191       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:53:54.417711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:53:54.417795       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:53:54.418558       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006298     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/681a294b-e922-4417-b18c-432c106b166b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fb9gc\" (UID: \"681a294b-e922-4417-b18c-432c106b166b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006366     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqdz\" (UniqueName: \"kubernetes.io/projected/5828d24d-1b7f-4b37-8eda-0cb1ec554c80-kube-api-access-kjqdz\") pod \"kubernetes-dashboard-855c9754f9-mmbrz\" (UID: \"5828d24d-1b7f-4b37-8eda-0cb1ec554c80\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmbrz"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006390     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8m7w\" (UniqueName: \"kubernetes.io/projected/681a294b-e922-4417-b18c-432c106b166b-kube-api-access-g8m7w\") pod \"dashboard-metrics-scraper-6ffb444bf9-fb9gc\" (UID: \"681a294b-e922-4417-b18c-432c106b166b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006414     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5828d24d-1b7f-4b37-8eda-0cb1ec554c80-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mmbrz\" (UID: \"5828d24d-1b7f-4b37-8eda-0cb1ec554c80\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmbrz"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: W1002 21:53:57.477839     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/crio-d5b420ffa7191ecdae9148603a68d94088d914d503339c59451a706b233a2b96 WatchSource:0}: Error finding container d5b420ffa7191ecdae9148603a68d94088d914d503339c59451a706b233a2b96: Status 404 returned error can't find the container with id d5b420ffa7191ecdae9148603a68d94088d914d503339c59451a706b233a2b96
	Oct 02 21:54:03 no-preload-661954 kubelet[768]: I1002 21:54:03.020582     768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:54:03 no-preload-661954 kubelet[768]: I1002 21:54:03.500642     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmbrz" podStartSLOduration=2.158877145 podStartE2EDuration="7.500622314s" podCreationTimestamp="2025-10-02 21:53:56 +0000 UTC" firstStartedPulling="2025-10-02 21:53:57.490232127 +0000 UTC m=+13.960224190" lastFinishedPulling="2025-10-02 21:54:02.831977206 +0000 UTC m=+19.301969359" observedRunningTime="2025-10-02 21:54:03.140416466 +0000 UTC m=+19.610408537" watchObservedRunningTime="2025-10-02 21:54:03.500622314 +0000 UTC m=+19.970614385"
	Oct 02 21:54:08 no-preload-661954 kubelet[768]: I1002 21:54:08.137727     768 scope.go:117] "RemoveContainer" containerID="58151bc543a9802af4bc7fc73cc143b6f28db9beb577f14c1abfc9b46ae10186"
	Oct 02 21:54:09 no-preload-661954 kubelet[768]: I1002 21:54:09.142897     768 scope.go:117] "RemoveContainer" containerID="58151bc543a9802af4bc7fc73cc143b6f28db9beb577f14c1abfc9b46ae10186"
	Oct 02 21:54:09 no-preload-661954 kubelet[768]: I1002 21:54:09.143511     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:09 no-preload-661954 kubelet[768]: E1002 21:54:09.143702     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:10 no-preload-661954 kubelet[768]: I1002 21:54:10.147058     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:10 no-preload-661954 kubelet[768]: E1002 21:54:10.147217     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:17 no-preload-661954 kubelet[768]: I1002 21:54:17.425685     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:17 no-preload-661954 kubelet[768]: E1002 21:54:17.425890     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:25 no-preload-661954 kubelet[768]: I1002 21:54:25.184624     768 scope.go:117] "RemoveContainer" containerID="4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495"
	Oct 02 21:54:28 no-preload-661954 kubelet[768]: I1002 21:54:28.869945     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:29 no-preload-661954 kubelet[768]: I1002 21:54:29.198278     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:30 no-preload-661954 kubelet[768]: I1002 21:54:30.201862     768 scope.go:117] "RemoveContainer" containerID="859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	Oct 02 21:54:30 no-preload-661954 kubelet[768]: E1002 21:54:30.202025     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:37 no-preload-661954 kubelet[768]: I1002 21:54:37.426088     768 scope.go:117] "RemoveContainer" containerID="859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	Oct 02 21:54:37 no-preload-661954 kubelet[768]: E1002 21:54:37.426285     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:46 no-preload-661954 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:54:46 no-preload-661954 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:54:46 no-preload-661954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7e7d0b2884e0b1793d47f40952ea30e017bf6f19ac40af2b25670184e8f23167] <==
	2025/10/02 21:54:02 Using namespace: kubernetes-dashboard
	2025/10/02 21:54:02 Using in-cluster config to connect to apiserver
	2025/10/02 21:54:02 Using secret token for csrf signing
	2025/10/02 21:54:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:54:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:54:02 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 21:54:02 Generating JWE encryption key
	2025/10/02 21:54:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:54:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:54:03 Initializing JWE encryption key from synchronized object
	2025/10/02 21:54:03 Creating in-cluster Sidecar client
	2025/10/02 21:54:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:54:03 Serving insecurely on HTTP port: 9090
	2025/10/02 21:54:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:54:02 Starting overwatch
	
	
	==> storage-provisioner [032a3b41ed0c467e55383a336d6fd7f6f244fd085545de3a0e761d76b74d86f8] <==
	I1002 21:54:25.252113       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:54:25.266024       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:54:25.267919       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:54:25.272281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:28.727059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:32.987551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:36.586163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:39.639389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.661725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.669171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:54:42.669335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:54:42.669519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-661954_38ec46aa-f74b-4899-bb1b-a8dcf6f30298!
	I1002 21:54:42.670230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"93c0aec5-6a68-4ee7-97c4-954139f85db0", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-661954_38ec46aa-f74b-4899-bb1b-a8dcf6f30298 became leader
	W1002 21:54:42.672213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.679195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:54:42.770579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-661954_38ec46aa-f74b-4899-bb1b-a8dcf6f30298!
	W1002 21:54:44.682024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:44.689134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:46.692684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:46.701225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:48.705704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:48.714309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495] <==
	I1002 21:53:54.194016       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:54:24.215739       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-661954 -n no-preload-661954
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-661954 -n no-preload-661954: exit status 2 (422.612589ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-661954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-661954
helpers_test.go:243: (dbg) docker inspect no-preload-661954:

-- stdout --
	[
	    {
	        "Id": "f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135",
	        "Created": "2025-10-02T21:52:01.48084196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1192593,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:53:35.269560752Z",
	            "FinishedAt": "2025-10-02T21:53:34.266431559Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/hosts",
	        "LogPath": "/var/lib/docker/containers/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135-json.log",
	        "Name": "/no-preload-661954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-661954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-661954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135",
	                "LowerDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79243a034d5e9dc6634af037ff52f641df0196883b405891b543a79ac513554b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-661954",
	                "Source": "/var/lib/docker/volumes/no-preload-661954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-661954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-661954",
	                "name.minikube.sigs.k8s.io": "no-preload-661954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "201ec57258dfe58af14e6fd9c40093c8a8b69c803ac59f23c32654cb394f3949",
	            "SandboxKey": "/var/run/docker/netns/201ec57258df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-661954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:90:22:5f:aa:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76a20fa3488da5dc1336dc065545b6cee383a338650fbc63c52f9a29c8b4abb9",
	                    "EndpointID": "dda2c79e86654c345a475640388dcb96c091457be23a942ef3efd12ab524e0c4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-661954",
	                        "f3d778675684"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954: exit status 2 (556.617218ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-661954 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-661954 logs -n 25: (1.642953647s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563 │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ stop    │ -p old-k8s-version-714101 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:53 UTC │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977       │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:53:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:53:34.901160 1192467 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:53:34.901344 1192467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:34.901354 1192467 out.go:374] Setting ErrFile to fd 2...
	I1002 21:53:34.901359 1192467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:34.901603 1192467 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:53:34.901971 1192467 out.go:368] Setting JSON to false
	I1002 21:53:34.902943 1192467 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23752,"bootTime":1759418263,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:53:34.903008 1192467 start.go:140] virtualization:  
	I1002 21:53:34.906123 1192467 out.go:179] * [no-preload-661954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:53:34.909913 1192467 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:53:34.909979 1192467 notify.go:221] Checking for updates...
	I1002 21:53:34.915971 1192467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:53:34.918955 1192467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:34.921858 1192467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:53:34.925522 1192467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:53:34.928444 1192467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:53:34.931934 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:34.932583 1192467 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:53:34.971588 1192467 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:53:34.971693 1192467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:35.062471 1192467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:53:35.050304835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:35.062582 1192467 docker.go:319] overlay module found
	I1002 21:53:35.065722 1192467 out.go:179] * Using the docker driver based on existing profile
	I1002 21:53:30.791545 1189833 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:53:31.561972 1189833 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:53:31.562140 1189833 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-132977 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:53:32.528147 1189833 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:53:32.528501 1189833 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-132977 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:53:33.148400 1189833 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:53:33.396421 1189833 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:53:33.791661 1189833 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:53:33.792035 1189833 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:53:34.284468 1189833 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:53:35.068509 1192467 start.go:306] selected driver: docker
	I1002 21:53:35.068525 1192467 start.go:936] validating driver "docker" against &{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:35.068625 1192467 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:53:35.069310 1192467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:35.175126 1192467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:53:35.156629246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:35.175468 1192467 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:35.175494 1192467 cni.go:84] Creating CNI manager for ""
	I1002 21:53:35.175552 1192467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:35.175584 1192467 start.go:350] cluster config:
	{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:35.178968 1192467 out.go:179] * Starting "no-preload-661954" primary control-plane node in "no-preload-661954" cluster
	I1002 21:53:35.181760 1192467 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:53:35.184818 1192467 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:53:35.187591 1192467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:35.187755 1192467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:53:35.188104 1192467 cache.go:107] acquiring lock: {Name:mk77546a797d48dfa87e4f15444ebfe2ae46de0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188183 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 21:53:35.188191 1192467 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.019µs
	I1002 21:53:35.188203 1192467 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 21:53:35.188217 1192467 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:53:35.188439 1192467 cache.go:107] acquiring lock: {Name:mkb30203224ed1c1a4b88d93d3aeb9a29d46fb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188507 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 21:53:35.188515 1192467 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 80.859µs
	I1002 21:53:35.188521 1192467 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 21:53:35.188533 1192467 cache.go:107] acquiring lock: {Name:mk2aab2e3052911889ff3d13b07414606ffa2c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188567 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 21:53:35.188572 1192467 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 41.386µs
	I1002 21:53:35.188578 1192467 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 21:53:35.188587 1192467 cache.go:107] acquiring lock: {Name:mkb1bbde6510d7fb66d3923ec81dcf1545e1aeca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188613 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 21:53:35.188618 1192467 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.73µs
	I1002 21:53:35.188624 1192467 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 21:53:35.188633 1192467 cache.go:107] acquiring lock: {Name:mk783e98a1246826a6f16b0bd25f720d93184154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188658 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 21:53:35.188663 1192467 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.76µs
	I1002 21:53:35.188676 1192467 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 21:53:35.188687 1192467 cache.go:107] acquiring lock: {Name:mk232b04a28dc0f5922a8e36bb60d83a371a69dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188713 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 21:53:35.188717 1192467 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.606µs
	I1002 21:53:35.188723 1192467 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 21:53:35.188732 1192467 cache.go:107] acquiring lock: {Name:mk17c8111e11ff4babf675464dda89dffef8dccd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188757 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 21:53:35.188763 1192467 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.401µs
	I1002 21:53:35.188879 1192467 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 21:53:35.188898 1192467 cache.go:107] acquiring lock: {Name:mkb9b4c6e229a9543f9236d679c4b53878bc9ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188953 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 21:53:35.188961 1192467 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 65.787µs
	I1002 21:53:35.188967 1192467 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 21:53:35.188974 1192467 cache.go:87] Successfully saved all images to host disk.
	I1002 21:53:35.209172 1192467 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:53:35.209192 1192467 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:53:35.209203 1192467 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:53:35.209225 1192467 start.go:361] acquireMachinesLock for no-preload-661954: {Name:mk6a385b42202eaf12d2e98c4a7f7a9c153c60e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.209273 1192467 start.go:365] duration metric: took 32.262µs to acquireMachinesLock for "no-preload-661954"
	I1002 21:53:35.209292 1192467 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:53:35.209297 1192467 fix.go:55] fixHost starting: 
	I1002 21:53:35.209553 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:35.231660 1192467 fix.go:113] recreateIfNeeded on no-preload-661954: state=Stopped err=<nil>
	W1002 21:53:35.231690 1192467 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:53:35.146380 1189833 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:53:35.272785 1189833 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:53:36.887132 1189833 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:53:38.110579 1189833 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:53:38.111916 1189833 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:53:38.114470 1189833 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:53:35.234941 1192467 out.go:252] * Restarting existing docker container for "no-preload-661954" ...
	I1002 21:53:35.235048 1192467 cli_runner.go:164] Run: docker start no-preload-661954
	I1002 21:53:35.619228 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:35.647925 1192467 kic.go:430] container "no-preload-661954" state is running.
	I1002 21:53:35.648332 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:35.670854 1192467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:53:35.671096 1192467 machine.go:93] provisionDockerMachine start ...
	I1002 21:53:35.671161 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:35.703665 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:35.703994 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:35.704006 1192467 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:53:35.704610 1192467 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:34201: read: connection reset by peer
	I1002 21:53:38.857630 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:53:38.857699 1192467 ubuntu.go:182] provisioning hostname "no-preload-661954"
	I1002 21:53:38.857794 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:38.878845 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:38.879146 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:38.879163 1192467 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-661954 && echo "no-preload-661954" | sudo tee /etc/hostname
	I1002 21:53:39.021606 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:53:39.021702 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.040144 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:39.040465 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:39.040489 1192467 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-661954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-661954/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-661954' | sudo tee -a /etc/hosts; 
				fi
			fi
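
The shell snippet just run is minikube's idempotent hostname mapping: it only touches /etc/hosts when no entry for the node name exists yet, and it prefers rewriting an existing 127.0.1.1 line over appending a new one. A minimal standalone sketch of the same pattern (NODE is a placeholder; the run above uses no-preload-661954):

	NODE=no-preload-661954
	if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    # rewrite the existing 127.0.1.1 alias in place
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
	  else
	    # no 127.0.1.1 line yet: append one
	    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
	  fi
	fi
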
	I1002 21:53:39.174332 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:53:39.174356 1192467 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:53:39.174381 1192467 ubuntu.go:190] setting up certificates
	I1002 21:53:39.174390 1192467 provision.go:84] configureAuth start
	I1002 21:53:39.174462 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:39.199440 1192467 provision.go:143] copyHostCerts
	I1002 21:53:39.199504 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:53:39.199513 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:53:39.199565 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:53:39.199656 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:53:39.199661 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:53:39.199687 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:53:39.199745 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:53:39.199749 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:53:39.199783 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:53:39.199839 1192467 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.no-preload-661954 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-661954]
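
The server certificate is issued for exactly the SAN list logged above (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-661954). To confirm what actually landed in the cert, openssl can print the SANs, the same kind of check the cert-options run in the audit table performs against apiserver.crt; a sketch using the path from this log:

	# inspect the SANs of the provisioned server cert
	openssl x509 -text -noout \
	  -in /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
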
	I1002 21:53:39.732249 1192467 provision.go:177] copyRemoteCerts
	I1002 21:53:39.732321 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:53:39.732369 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.750662 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:39.860304 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:53:39.884742 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:53:38.118027 1189833 out.go:252]   - Booting up control plane ...
	I1002 21:53:38.118154 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:53:38.118243 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:53:38.118329 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:53:38.137079 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:53:38.137300 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:53:38.146003 1189833 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:53:38.146568 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:53:38.146815 1189833 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:53:38.276397 1189833 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:53:38.276520 1189833 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:53:38.790399 1189833 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 510.980423ms
	I1002 21:53:38.790928 1189833 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:53:38.791254 1189833 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:53:38.791540 1189833 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:53:38.792552 1189833 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:53:39.918483 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:53:39.949909 1192467 provision.go:87] duration metric: took 775.494692ms to configureAuth
	I1002 21:53:39.949940 1192467 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:53:39.950130 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:39.950234 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.982165 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:39.982524 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:39.982550 1192467 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:53:40.431478 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:53:40.431547 1192467 machine.go:96] duration metric: took 4.760440429s to provisionDockerMachine
	I1002 21:53:40.431574 1192467 start.go:294] postStartSetup for "no-preload-661954" (driver="docker")
	I1002 21:53:40.431603 1192467 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:53:40.431723 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:53:40.431800 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.460589 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.579352 1192467 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:53:40.582836 1192467 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:53:40.582872 1192467 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:53:40.582883 1192467 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:53:40.582946 1192467 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:53:40.583041 1192467 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:53:40.583155 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:53:40.591092 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:53:40.616605 1192467 start.go:297] duration metric: took 184.998596ms for postStartSetup
	I1002 21:53:40.616696 1192467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:53:40.616844 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.647561 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.750782 1192467 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:53:40.756954 1192467 fix.go:57] duration metric: took 5.547649748s for fixHost
	I1002 21:53:40.756981 1192467 start.go:84] releasing machines lock for "no-preload-661954", held for 5.547699282s
	I1002 21:53:40.757047 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:40.784891 1192467 ssh_runner.go:195] Run: cat /version.json
	I1002 21:53:40.784948 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.785190 1192467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:53:40.785240 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.822484 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.822940 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.937967 1192467 ssh_runner.go:195] Run: systemctl --version
	I1002 21:53:41.061857 1192467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:53:41.145969 1192467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:53:41.151940 1192467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:53:41.152019 1192467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:53:41.165141 1192467 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:53:41.165186 1192467 start.go:496] detecting cgroup driver to use...
	I1002 21:53:41.165217 1192467 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:53:41.165275 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:53:41.188391 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:53:41.213237 1192467 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:53:41.213309 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:53:41.238346 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:53:41.265240 1192467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:53:41.496554 1192467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:53:41.702649 1192467 docker.go:234] disabling docker service ...
	I1002 21:53:41.702738 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:53:41.723182 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:53:41.753668 1192467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:53:41.948226 1192467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:53:42.192815 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:53:42.223559 1192467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:53:42.251561 1192467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:53:42.251654 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.267876 1192467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:53:42.267981 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.285908 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.301305 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.318315 1192467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:53:42.332682 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.345212 1192467 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.361714 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.378749 1192467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:53:42.392058 1192467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:53:42.404240 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:42.610903 1192467 ssh_runner.go:195] Run: sudo systemctl restart crio
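
Taken together, the sed edits above set the pause image, force the cgroupfs cgroup manager with conmon in the "pod" cgroup, and open unprivileged ports via default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. A quick way to confirm the resulting values on the node (key names taken from the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
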
	I1002 21:53:42.815298 1192467 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:53:42.815393 1192467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:53:42.821814 1192467 start.go:564] Will wait 60s for crictl version
	I1002 21:53:42.821896 1192467 ssh_runner.go:195] Run: which crictl
	I1002 21:53:42.825340 1192467 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:53:42.877728 1192467 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:53:42.877820 1192467 ssh_runner.go:195] Run: crio --version
	I1002 21:53:42.940328 1192467 ssh_runner.go:195] Run: crio --version
	I1002 21:53:42.988804 1192467 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:53:42.991635 1192467 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:53:43.013876 1192467 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:53:43.017684 1192467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:53:43.040354 1192467 kubeadm.go:883] updating cluster {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:53:43.040474 1192467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:43.040519 1192467 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:53:43.097583 1192467 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:53:43.097609 1192467 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:53:43.097617 1192467 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:53:43.097711 1192467 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-661954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
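
The unit shown above uses a standard systemd idiom: the bare ExecStart= line clears whatever ExecStart the base kubelet.service defined, so the full command in the drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) wins. To see the merged result on a node:

	# show the base unit plus all drop-ins as systemd resolves them
	sudo systemctl cat kubelet
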
	I1002 21:53:43.097796 1192467 ssh_runner.go:195] Run: crio config
	I1002 21:53:43.192119 1192467 cni.go:84] Creating CNI manager for ""
	I1002 21:53:43.192150 1192467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:43.192168 1192467 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:53:43.192204 1192467 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-661954 NodeName:no-preload-661954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:53:43.192338 1192467 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-661954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
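
	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:189. A minimal Go sketch of that kind of rendering via text/template, using a hypothetical struct with only a few of the fields from this run (not minikube's actual template or types):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmOpts is a hypothetical subset of the options logged above;
	// the real minikube struct carries many more fields.
	type kubeadmOpts struct {
		AdvertiseAddress string
		NodeName         string
		PodSubnet        string
		ServiceCIDR      string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values taken from the run above.
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.85.2",
			NodeName:         "no-preload-661954",
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
		}
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}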
	
	I1002 21:53:43.192434 1192467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:53:43.205178 1192467 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:53:43.205246 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:53:43.215550 1192467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:53:43.239441 1192467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:53:43.262457 1192467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 21:53:43.284544 1192467 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:53:43.293407 1192467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
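
	The bash one-liner above makes the control-plane host record idempotent: grep -v strips any existing line for the hostname, the fresh record is appended, and the result is copied back over /etc/hosts with sudo. A minimal Go sketch of the same filter-and-append, assuming a locally writable hosts file rather than the real sudo-guarded /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHostRecord drops any existing line ending in "\t"+host and
	// appends "ip\thost", mirroring the grep -v / echo pipeline above.
	func updateHostRecord(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale record for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// /tmp/hosts is a stand-in; the real target is /etc/hosts via sudo cp.
		if err := updateHostRecord("/tmp/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}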
	I1002 21:53:43.307245 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:43.506121 1192467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:43.523524 1192467 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954 for IP: 192.168.85.2
	I1002 21:53:43.523545 1192467 certs.go:195] generating shared ca certs ...
	I1002 21:53:43.523561 1192467 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:43.523728 1192467 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:53:43.523791 1192467 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:53:43.523803 1192467 certs.go:257] generating profile certs ...
	I1002 21:53:43.523918 1192467 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key
	I1002 21:53:43.523983 1192467 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4
	I1002 21:53:43.524026 1192467 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key
	I1002 21:53:43.524152 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:53:43.524198 1192467 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:53:43.524211 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:53:43.524234 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:53:43.524263 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:53:43.524302 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:53:43.524359 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:53:43.525092 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:53:43.586699 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:53:43.621808 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:53:43.686543 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:53:43.735499 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:53:43.762086 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:53:43.794944 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:53:43.868155 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:53:43.923880 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:53:43.951493 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:53:43.988190 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:53:44.019229 1192467 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:53:44.046658 1192467 ssh_runner.go:195] Run: openssl version
	I1002 21:53:44.053428 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:53:44.063274 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.070507 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.070596 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.113319 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:53:44.122171 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:53:44.131226 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.136447 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.136521 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.182170 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:53:44.195482 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:53:44.207002 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.211690 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.211780 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.256193 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
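
	Each CA dropped under /usr/share/ca-certificates is linked into /etc/ssl/certs under the name <subject-hash>.0, which is how OpenSSL locates trust anchors; the hash (b5213941, 51391683, 3ec20f2e in this run) is what `openssl x509 -hash -noout` prints. A minimal sketch that shells out the same way; linkCA and its unconditional replace are illustrative, whereas the log's `test -L || ln -fs` only creates the link if it is missing:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA computes the OpenSSL subject hash of pem and symlinks it as
	// <hash>.0 inside dir, matching the ln -fs commands in the log above.
	func linkCA(pem, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(dir, hash+".0")
		os.Remove(link) // -f behaviour: replace an existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}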
	I1002 21:53:44.264627 1192467 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:53:44.268830 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:53:44.317092 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:53:44.387292 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:53:44.526916 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:53:44.730899 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:53:44.894226 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
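
	`-checkend 86400` asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration before kubeadm runs. The same predicate in pure Go with crypto/x509, as a sketch against one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside d, the question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Until(cert.NotAfter) < d, nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}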
	I1002 21:53:45.002059 1192467 kubeadm.go:400] StartCluster: {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
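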
	I1002 21:53:45.002171 1192467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:53:45.002259 1192467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:53:45.087963 1192467 cri.go:89] found id: "3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98"
	I1002 21:53:45.088014 1192467 cri.go:89] found id: "c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270"
	I1002 21:53:45.088021 1192467 cri.go:89] found id: "88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2"
	I1002 21:53:45.088025 1192467 cri.go:89] found id: "5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08"
	I1002 21:53:45.088037 1192467 cri.go:89] found id: ""
	I1002 21:53:45.088116 1192467 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:53:45.106135 1192467 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:45Z" level=error msg="open /run/runc: no such file or directory"
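
	The unpause probe shells out to `runc list -f json` and tolerates failure: /run/runc is absent here because cri-o keeps its runtime state elsewhere, so the warning above is non-fatal and the flow continues. A sketch of parsing that output, assuming runc's documented lowercase JSON state fields (id, status):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// containerState is the subset of runc's JSON state we care about.
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	// listPaused runs `runc list -f json` and returns paused container IDs.
	// A failure (e.g. missing /run/runc, as above) is surfaced to the
	// caller, which may treat it as "nothing to unpause".
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, err
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("list paused failed (treated as none):", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}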
	I1002 21:53:45.106285 1192467 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:53:45.127163 1192467 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:53:45.127205 1192467 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:53:45.127315 1192467 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:53:45.145179 1192467 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
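
	Restart-versus-fresh-install is decided by whether kubelet and etcd state from a previous boot is still on disk; the `sudo ls` a few lines above probes exactly three paths. A sketch of the same gate (the paths are taken from that command; the function name is illustrative):

	package main

	import (
		"fmt"
		"os"
	)

	// hasExistingCluster mirrors the `sudo ls` probe in the log: all three
	// artifacts must exist for minikube to attempt a cluster restart
	// instead of a fresh kubeadm init.
	func hasExistingCluster() bool {
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		for _, p := range paths {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		if hasExistingCluster() {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no prior state, fresh init")
		}
	}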
	I1002 21:53:45.145726 1192467 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-661954" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:45.145883 1192467 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-661954" cluster setting kubeconfig missing "no-preload-661954" context setting]
	I1002 21:53:45.146372 1192467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.148353 1192467 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:53:45.171240 1192467 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 21:53:45.171280 1192467 kubeadm.go:601] duration metric: took 44.0584ms to restartPrimaryControlPlane
	I1002 21:53:45.171301 1192467 kubeadm.go:402] duration metric: took 169.276623ms to StartCluster
	I1002 21:53:45.171317 1192467 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.171405 1192467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:45.172141 1192467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.172397 1192467 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:45.172731 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:45.172795 1192467 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:53:45.172941 1192467 addons.go:69] Setting storage-provisioner=true in profile "no-preload-661954"
	I1002 21:53:45.172962 1192467 addons.go:238] Setting addon storage-provisioner=true in "no-preload-661954"
	W1002 21:53:45.172971 1192467 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:53:45.172986 1192467 addons.go:69] Setting dashboard=true in profile "no-preload-661954"
	I1002 21:53:45.173070 1192467 addons.go:238] Setting addon dashboard=true in "no-preload-661954"
	W1002 21:53:45.173108 1192467 addons.go:247] addon dashboard should already be in state true
	I1002 21:53:45.173158 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.172993 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.173802 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.173831 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.173011 1192467 addons.go:69] Setting default-storageclass=true in profile "no-preload-661954"
	I1002 21:53:45.174417 1192467 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-661954"
	I1002 21:53:45.174758 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.178130 1192467 out.go:179] * Verifying Kubernetes components...
	I1002 21:53:45.184246 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:45.223050 1192467 addons.go:238] Setting addon default-storageclass=true in "no-preload-661954"
	W1002 21:53:45.223076 1192467 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:53:45.223104 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.223578 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.253512 1192467 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:53:45.256789 1192467 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:45.256819 1192467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:53:45.256917 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.265209 1192467 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:53:45.272097 1192467 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:53:47.251888 1189833 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.458509189s
	I1002 21:53:48.346662 1189833 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 9.553935935s
	I1002 21:53:49.793477 1189833 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.001693502s
	I1002 21:53:49.824620 1189833 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:53:49.862218 1189833 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:53:49.880328 1189833 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:53:49.880555 1189833 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-132977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:53:49.899333 1189833 kubeadm.go:318] [bootstrap-token] Using token: 21plum.6l6cs3s9kwcorv4m
	I1002 21:53:45.275305 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:53:45.275342 1192467 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:53:45.275420 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.296741 1192467 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:45.296773 1192467 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:53:45.296850 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.321320 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.368395 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.374254 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.730839 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:53:45.730866 1192467 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:53:45.784541 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:53:45.784569 1192467 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:53:45.855767 1192467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:45.865965 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:45.869532 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:53:45.869557 1192467 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:53:45.885049 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:45.961157 1192467 node_ready.go:35] waiting up to 6m0s for node "no-preload-661954" to be "Ready" ...
	I1002 21:53:45.963075 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:53:45.963130 1192467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:53:46.130992 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:53:46.131067 1192467 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:53:46.259505 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:53:46.259570 1192467 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:53:46.362568 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:53:46.362647 1192467 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:53:46.404379 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:53:46.404444 1192467 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:53:46.439621 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:53:46.439701 1192467 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:53:46.463673 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:53:49.902382 1189833 out.go:252]   - Configuring RBAC rules ...
	I1002 21:53:49.902508 1189833 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:53:49.914154 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:53:49.922679 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:53:49.927109 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:53:49.931429 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:53:49.936135 1189833 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:53:50.201066 1189833 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:53:50.702788 1189833 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:53:51.203959 1189833 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:53:51.205373 1189833 kubeadm.go:318] 
	I1002 21:53:51.205451 1189833 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:53:51.205465 1189833 kubeadm.go:318] 
	I1002 21:53:51.205549 1189833 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:53:51.205558 1189833 kubeadm.go:318] 
	I1002 21:53:51.205585 1189833 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:53:51.205645 1189833 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:53:51.205701 1189833 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:53:51.205710 1189833 kubeadm.go:318] 
	I1002 21:53:51.205763 1189833 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:53:51.205772 1189833 kubeadm.go:318] 
	I1002 21:53:51.205819 1189833 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:53:51.205826 1189833 kubeadm.go:318] 
	I1002 21:53:51.205878 1189833 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:53:51.205956 1189833 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:53:51.206027 1189833 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:53:51.206052 1189833 kubeadm.go:318] 
	I1002 21:53:51.206137 1189833 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:53:51.206218 1189833 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:53:51.206226 1189833 kubeadm.go:318] 
	I1002 21:53:51.206309 1189833 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 21plum.6l6cs3s9kwcorv4m \
	I1002 21:53:51.206415 1189833 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:53:51.206439 1189833 kubeadm.go:318] 	--control-plane 
	I1002 21:53:51.206448 1189833 kubeadm.go:318] 
	I1002 21:53:51.206532 1189833 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:53:51.206541 1189833 kubeadm.go:318] 
	I1002 21:53:51.206629 1189833 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 21plum.6l6cs3s9kwcorv4m \
	I1002 21:53:51.206735 1189833 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:53:51.215083 1189833 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:53:51.215325 1189833 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:53:51.215466 1189833 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:53:51.215488 1189833 cni.go:84] Creating CNI manager for ""
	I1002 21:53:51.215496 1189833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:51.218899 1189833 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:53:52.178341 1192467 node_ready.go:49] node "no-preload-661954" is "Ready"
	I1002 21:53:52.178366 1192467 node_ready.go:38] duration metric: took 6.217135309s for node "no-preload-661954" to be "Ready" ...
	I1002 21:53:52.178381 1192467 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:53:52.178441 1192467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:53:52.585784 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.719776673s)
	I1002 21:53:54.584206 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.699121666s)
	I1002 21:53:54.614118 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.150352846s)
	I1002 21:53:54.614340 1192467 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.435886189s)
	I1002 21:53:54.614380 1192467 api_server.go:72] duration metric: took 9.441950505s to wait for apiserver process to appear ...
	I1002 21:53:54.614401 1192467 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:53:54.614430 1192467 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:54.617022 1192467 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-661954 addons enable metrics-server
	
	I1002 21:53:54.620054 1192467 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 21:53:54.622900 1192467 addons.go:514] duration metric: took 9.450101363s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 21:53:54.630627 1192467 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:53:54.630697 1192467 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
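
	The 500 above is transient: every check passes except the rbac/bootstrap-roles post-start hook, which stays red until the bootstrap RBAC objects have been written, so the wait loop simply re-polls /healthz until it returns 200 (as it does at 21:53:55 below). A minimal polling sketch, assuming the apiserver's cluster-local cert is not in the host trust store (hence InsecureSkipVerify, for this sketch only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 or timeout elapses.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cluster cert; skip
				// verification here rather than wiring in the CA bundle.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz did not turn 200 within %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}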
	I1002 21:53:51.221764 1189833 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:53:51.226028 1189833 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:53:51.226066 1189833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:53:51.254968 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:53:51.939290 1189833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:53:51.939413 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:51.939489 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-132977 minikube.k8s.io/updated_at=2025_10_02T21_53_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=embed-certs-132977 minikube.k8s.io/primary=true
	I1002 21:53:52.314456 1189833 ops.go:34] apiserver oom_adj: -16
	I1002 21:53:52.314561 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:52.815637 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:53.315233 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:53.814746 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:54.314889 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:54.814670 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:55.315642 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:55.815157 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:56.315270 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:56.433779 1189833 kubeadm.go:1113] duration metric: took 4.494413139s to wait for elevateKubeSystemPrivileges
	I1002 21:53:56.433817 1189833 kubeadm.go:402] duration metric: took 27.86764968s to StartCluster
	I1002 21:53:56.433835 1189833 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:56.433900 1189833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:56.435285 1189833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:56.435511 1189833 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:56.435638 1189833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:53:56.435896 1189833 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:56.435933 1189833 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:53:56.435991 1189833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132977"
	I1002 21:53:56.436007 1189833 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-132977"
	I1002 21:53:56.436027 1189833 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:53:56.436540 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.436923 1189833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132977"
	I1002 21:53:56.436947 1189833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132977"
	I1002 21:53:56.437223 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.439647 1189833 out.go:179] * Verifying Kubernetes components...
	I1002 21:53:56.443344 1189833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:56.476650 1189833 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:53:56.478543 1189833 addons.go:238] Setting addon default-storageclass=true in "embed-certs-132977"
	I1002 21:53:56.478584 1189833 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:53:56.479128 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.479690 1189833 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:56.479712 1189833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:53:56.479769 1189833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:53:56.524116 1189833 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:56.524137 1189833 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:53:56.524204 1189833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:53:56.524618 1189833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:53:56.565467 1189833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:53:56.834821 1189833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
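
	The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway IP (192.168.76.1) from inside the cluster: it inserts a hosts stanza with fallthrough just before the forward directive, and a log directive before errors. A Go sketch of the same insertion done with string handling instead of sed (the sample Corefile in main is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts stanza before the forward
	// directive, mirroring the sed pipeline in the log above.
	func injectHostRecord(corefile, gatewayIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
		var b strings.Builder
		for _, line := range strings.Split(strings.TrimRight(corefile, "\n"), "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(stanza)
			}
			b.WriteString(line + "\n")
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
	}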
	I1002 21:53:56.927271 1189833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:56.955009 1189833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:56.973420 1189833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:57.566058 1189833 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1002 21:53:57.900359 1189833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:53:57.915026 1189833 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:53:55.114484 1192467 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:55.123286 1192467 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 21:53:55.124661 1192467 api_server.go:141] control plane version: v1.34.1
	I1002 21:53:55.124693 1192467 api_server.go:131] duration metric: took 510.273967ms to wait for apiserver health ...
	I1002 21:53:55.124703 1192467 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:53:55.128660 1192467 system_pods.go:59] 8 kube-system pods found
	I1002 21:53:55.128703 1192467 system_pods.go:61] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:55.128760 1192467 system_pods.go:61] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:53:55.128777 1192467 system_pods.go:61] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:55.128794 1192467 system_pods.go:61] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:53:55.128828 1192467 system_pods.go:61] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:53:55.128841 1192467 system_pods.go:61] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:55.128879 1192467 system_pods.go:61] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:53:55.128911 1192467 system_pods.go:61] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:55.128919 1192467 system_pods.go:74] duration metric: took 4.210506ms to wait for pod list to return data ...
	I1002 21:53:55.128954 1192467 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:53:55.132297 1192467 default_sa.go:45] found service account: "default"
	I1002 21:53:55.132328 1192467 default_sa.go:55] duration metric: took 3.360478ms for default service account to be created ...
	I1002 21:53:55.132341 1192467 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:53:55.136969 1192467 system_pods.go:86] 8 kube-system pods found
	I1002 21:53:55.137010 1192467 system_pods.go:89] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:55.137026 1192467 system_pods.go:89] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:53:55.137034 1192467 system_pods.go:89] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:55.137042 1192467 system_pods.go:89] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:53:55.137053 1192467 system_pods.go:89] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:53:55.137062 1192467 system_pods.go:89] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:55.137069 1192467 system_pods.go:89] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:53:55.137078 1192467 system_pods.go:89] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:55.137087 1192467 system_pods.go:126] duration metric: took 4.740634ms to wait for k8s-apps to be running ...
	I1002 21:53:55.137100 1192467 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:53:55.137170 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:55.158776 1192467 system_svc.go:56] duration metric: took 21.666236ms WaitForService to wait for kubelet
	I1002 21:53:55.158878 1192467 kubeadm.go:586] duration metric: took 9.986436313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:55.158941 1192467 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:53:55.162488 1192467 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:53:55.162583 1192467 node_conditions.go:123] node cpu capacity is 2
	I1002 21:53:55.162612 1192467 node_conditions.go:105] duration metric: took 3.648453ms to run NodePressure ...
	I1002 21:53:55.162651 1192467 start.go:242] waiting for startup goroutines ...
	I1002 21:53:55.162679 1192467 start.go:247] waiting for cluster config update ...
	I1002 21:53:55.162704 1192467 start.go:256] writing updated cluster config ...
	I1002 21:53:55.163077 1192467 ssh_runner.go:195] Run: rm -f paused
	I1002 21:53:55.167590 1192467 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:53:55.171504 1192467 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:53:57.180373 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:53:59.678019 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	I1002 21:53:57.918188 1189833 addons.go:514] duration metric: took 1.482248421s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:53:58.071227 1189833 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-132977" context rescaled to 1 replicas
	W1002 21:53:59.905322 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:01.681466 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:04.177105 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:02.403445 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:04.405558 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:06.179313 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:08.678604 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:06.903317 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:09.403229 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:11.176586 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:13.677184 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:11.403384 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:13.903569 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:15.678985 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:18.177426 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:15.904067 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:18.403552 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:20.678796 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:23.177446 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:20.903769 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:22.903999 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:24.904140 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:25.178291 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:27.677592 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:26.904328 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:29.403133 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:30.177272 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:32.677201 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	I1002 21:54:33.176891 1192467 pod_ready.go:94] pod "coredns-66bc5c9577-ddsr2" is "Ready"
	I1002 21:54:33.176922 1192467 pod_ready.go:86] duration metric: took 38.005343021s for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.179511 1192467 pod_ready.go:83] waiting for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.184943 1192467 pod_ready.go:94] pod "etcd-no-preload-661954" is "Ready"
	I1002 21:54:33.184972 1192467 pod_ready.go:86] duration metric: took 5.432776ms for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.187710 1192467 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.196382 1192467 pod_ready.go:94] pod "kube-apiserver-no-preload-661954" is "Ready"
	I1002 21:54:33.196413 1192467 pod_ready.go:86] duration metric: took 8.671641ms for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.198899 1192467 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.375336 1192467 pod_ready.go:94] pod "kube-controller-manager-no-preload-661954" is "Ready"
	I1002 21:54:33.375367 1192467 pod_ready.go:86] duration metric: took 176.436003ms for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.575930 1192467 pod_ready.go:83] waiting for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.975363 1192467 pod_ready.go:94] pod "kube-proxy-5jstv" is "Ready"
	I1002 21:54:33.975393 1192467 pod_ready.go:86] duration metric: took 399.437804ms for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.180551 1192467 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.575430 1192467 pod_ready.go:94] pod "kube-scheduler-no-preload-661954" is "Ready"
	I1002 21:54:34.575460 1192467 pod_ready.go:86] duration metric: took 394.885383ms for pod "kube-scheduler-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.575481 1192467 pod_ready.go:40] duration metric: took 39.407775252s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:34.631486 1192467 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:54:34.634421 1192467 out.go:179] * Done! kubectl is now configured to use "no-preload-661954" cluster and "default" namespace by default
	W1002 21:54:31.403472 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:33.903536 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:35.903696 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:37.903870 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	I1002 21:54:38.404715 1189833 node_ready.go:49] node "embed-certs-132977" is "Ready"
	I1002 21:54:38.404740 1189833 node_ready.go:38] duration metric: took 40.504339879s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:54:38.404753 1189833 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:54:38.404814 1189833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:54:38.433291 1189833 api_server.go:72] duration metric: took 41.997752118s to wait for apiserver process to appear ...
	I1002 21:54:38.433313 1189833 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:54:38.433332 1189833 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:54:38.445543 1189833 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:54:38.446830 1189833 api_server.go:141] control plane version: v1.34.1
	I1002 21:54:38.446852 1189833 api_server.go:131] duration metric: took 13.531475ms to wait for apiserver health ...
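The healthz wait above is nothing more than an HTTPS GET that must return 200 with the body "ok". A minimal standalone sketch of the same probe (an illustration, not minikube's code; assumption: certificate verification is skipped, as a bootstrap probe typically runs before the cluster CA is trusted on the client):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over TLS with a cluster-internal CA,
	// so this bootstrap-style probe skips verification (assumption: it
	// runs before the CA bundle is installed on the client).
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}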
	I1002 21:54:38.446860 1189833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:54:38.450727 1189833 system_pods.go:59] 8 kube-system pods found
	I1002 21:54:38.450758 1189833 system_pods.go:61] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.450765 1189833 system_pods.go:61] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.450771 1189833 system_pods.go:61] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.450775 1189833 system_pods.go:61] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.450784 1189833 system_pods.go:61] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.450789 1189833 system_pods.go:61] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.450793 1189833 system_pods.go:61] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.450799 1189833 system_pods.go:61] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.450805 1189833 system_pods.go:74] duration metric: took 3.939416ms to wait for pod list to return data ...
	I1002 21:54:38.450813 1189833 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:54:38.454525 1189833 default_sa.go:45] found service account: "default"
	I1002 21:54:38.454544 1189833 default_sa.go:55] duration metric: took 3.725851ms for default service account to be created ...
	I1002 21:54:38.454554 1189833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:54:38.457911 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:38.457941 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.457949 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.457955 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.457959 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.457964 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.457968 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.457971 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.457977 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.457997 1189833 retry.go:31] will retry after 282.68274ms: missing components: kube-dns
	I1002 21:54:38.745579 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:38.745667 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.745700 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.745714 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.745720 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.745725 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.745730 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.745734 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.745740 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.745759 1189833 retry.go:31] will retry after 289.646816ms: missing components: kube-dns
	I1002 21:54:39.039529 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:39.039556 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:39.039562 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:39.039569 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:39.039573 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:39.039578 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:39.039581 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:39.039585 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:39.039591 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:39.039605 1189833 retry.go:31] will retry after 417.217485ms: missing components: kube-dns
	I1002 21:54:39.461452 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:39.461501 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running
	I1002 21:54:39.461509 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:39.461513 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:39.461518 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:39.461541 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:39.461554 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:39.461573 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:39.461584 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:54:39.461594 1189833 system_pods.go:126] duration metric: took 1.007033707s to wait for k8s-apps to be running ...
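The retry.go lines above re-list the pods with a jittered, growing delay until no required component is missing. A sketch of that polling shape (the growth factor and jitter here are assumptions, not minikube's actual policy; checkComponents is a hypothetical stand-in for the pod inspection):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkComponents stands in for the real pod inspection; it reports an
// error naming whatever required component is still missing (hypothetical).
func checkComponents(attempt int) error {
	if attempt < 3 {
		return errors.New("missing components: kube-dns")
	}
	return nil
}

func main() {
	// Jittered backoff: each wait grows and gets a random component,
	// matching the shape of the retry intervals in the log above.
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		if err := checkComponents(attempt); err == nil {
			fmt.Println("all components running")
			return
		} else {
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay
		}
	}
}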
	I1002 21:54:39.461604 1189833 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:54:39.461671 1189833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:54:39.478598 1189833 system_svc.go:56] duration metric: took 16.985989ms WaitForService to wait for kubelet
	I1002 21:54:39.478670 1189833 kubeadm.go:586] duration metric: took 43.043135125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:54:39.478704 1189833 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:54:39.482160 1189833 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:54:39.482195 1189833 node_conditions.go:123] node cpu capacity is 2
	I1002 21:54:39.482210 1189833 node_conditions.go:105] duration metric: took 3.499272ms to run NodePressure ...
	I1002 21:54:39.482223 1189833 start.go:242] waiting for startup goroutines ...
	I1002 21:54:39.482230 1189833 start.go:247] waiting for cluster config update ...
	I1002 21:54:39.482242 1189833 start.go:256] writing updated cluster config ...
	I1002 21:54:39.482538 1189833 ssh_runner.go:195] Run: rm -f paused
	I1002 21:54:39.486611 1189833 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:39.561128 1189833 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.566358 1189833 pod_ready.go:94] pod "coredns-66bc5c9577-rl5vq" is "Ready"
	I1002 21:54:39.566389 1189833 pod_ready.go:86] duration metric: took 5.230919ms for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.568792 1189833 pod_ready.go:83] waiting for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.573727 1189833 pod_ready.go:94] pod "etcd-embed-certs-132977" is "Ready"
	I1002 21:54:39.573755 1189833 pod_ready.go:86] duration metric: took 4.934738ms for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.576177 1189833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.580896 1189833 pod_ready.go:94] pod "kube-apiserver-embed-certs-132977" is "Ready"
	I1002 21:54:39.580922 1189833 pod_ready.go:86] duration metric: took 4.714781ms for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.583217 1189833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.892636 1189833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-132977" is "Ready"
	I1002 21:54:39.892665 1189833 pod_ready.go:86] duration metric: took 309.422099ms for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.096570 1189833 pod_ready.go:83] waiting for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.492183 1189833 pod_ready.go:94] pod "kube-proxy-rslfw" is "Ready"
	I1002 21:54:40.492212 1189833 pod_ready.go:86] duration metric: took 395.615555ms for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.692648 1189833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:41.092952 1189833 pod_ready.go:94] pod "kube-scheduler-embed-certs-132977" is "Ready"
	I1002 21:54:41.092979 1189833 pod_ready.go:86] duration metric: took 400.302152ms for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:41.092991 1189833 pod_ready.go:40] duration metric: took 1.606349041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:41.150287 1189833 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:54:41.156763 1189833 out.go:179] * Done! kubectl is now configured to use "embed-certs-132977" cluster and "default" namespace by default
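Both startup logs above end with the same "extra waiting" pass: list kube-system pods by one label selector per control-plane component and require the Ready condition on each. A condensed client-go sketch of that check (a sketch, not minikube's pod_ready.go; assumes a reachable kubeconfig at the default path; the selector list is copied from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One selector per control-plane component, as in the log above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
}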
	
	
	==> CRI-O <==
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.87296793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.883796371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.884331987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.899401063Z" level=info msg="Created container 859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc/dashboard-metrics-scraper" id=76f3d10d-3f42-4d32-b1c6-5dadb86f826e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.900100024Z" level=info msg="Starting container: 859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1" id=46e36e6b-dc4e-4558-a217-3f376742e0cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:54:28 no-preload-661954 conmon[1626]: conmon 859aca45b87df60af8c0 <ninfo>: container 1628 exited with status 1
	Oct 02 21:54:28 no-preload-661954 crio[647]: time="2025-10-02T21:54:28.906412785Z" level=info msg="Started container" PID=1628 containerID=859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc/dashboard-metrics-scraper id=46e36e6b-dc4e-4558-a217-3f376742e0cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=b489017ce62851f397774df3e66e0024acb17f6620638b576384427bbfc11ede
	Oct 02 21:54:29 no-preload-661954 crio[647]: time="2025-10-02T21:54:29.199455539Z" level=info msg="Removing container: 2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85" id=1398e49a-0b9b-47f7-8df1-b58cbf53bfe7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:54:29 no-preload-661954 crio[647]: time="2025-10-02T21:54:29.206322369Z" level=info msg="Error loading conmon cgroup of container 2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85: cgroup deleted" id=1398e49a-0b9b-47f7-8df1-b58cbf53bfe7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:54:29 no-preload-661954 crio[647]: time="2025-10-02T21:54:29.209119949Z" level=info msg="Removed container 2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc/dashboard-metrics-scraper" id=1398e49a-0b9b-47f7-8df1-b58cbf53bfe7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.713412716Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.721033569Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.721067554Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.721091086Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.724147571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.724180998Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.724206606Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.727310614Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.72734519Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.727367688Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.73013248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.730163511Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.730188044Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.733076272Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:54:33 no-preload-661954 crio[647]: time="2025-10-02T21:54:33.73310878Z" level=info msg="Updated default CNI network name to kindnet"
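The CNI monitoring events above (CREATE, WRITE, RENAME under /etc/cni/net.d) come from an inotify-style watch on the CNI config directory: each time kindnet rewrites its conflist, CRI-O re-reads the directory and re-selects the default network. A generic fsnotify sketch of that watch pattern (illustrative only, not CRI-O's implementation):

package main

import (
	"fmt"
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Watch the CNI config directory the same way the log above does.
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			fmt.Printf("CNI monitoring event %s %q\n", ev.Op, ev.Name)
			// A real monitor would now re-scan the directory and pick
			// the first valid .conflist as the default network.
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}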
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	859aca45b87df       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   b489017ce6285       dashboard-metrics-scraper-6ffb444bf9-fb9gc   kubernetes-dashboard
	032a3b41ed0c4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           26 seconds ago       Running             storage-provisioner         2                   0d90154fe652d       storage-provisioner                          kube-system
	7e7d0b2884e0b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   d5b420ffa7191       kubernetes-dashboard-855c9754f9-mmbrz        kubernetes-dashboard
	f633fcfe67ab1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   5201dea46c850       busybox                                      default
	4e0dc14637932       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   0d90154fe652d       storage-provisioner                          kube-system
	6b82861a8945a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   11fd8d911b948       coredns-66bc5c9577-ddsr2                     kube-system
	9e3cec57132b7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   e76d327c93aef       kindnet-flmgm                                kube-system
	a6ab31d1759e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   e8aa4a619b440       kube-proxy-5jstv                             kube-system
	3cf04b502d36e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5796b45e7007a       kube-apiserver-no-preload-661954             kube-system
	c31f86dc038a7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   58ee2f7b0e671       etcd-no-preload-661954                       kube-system
	88076a11fa43f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2da8a0fde382d       kube-controller-manager-no-preload-661954    kube-system
	5cd95915db618       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   90b8b0bff4f83       kube-scheduler-no-preload-661954             kube-system
	
	
	==> coredns [6b82861a8945a6d58ec459cbce94b85d54a3a5234cc6ba7d3d096a78eb01fdee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55879 - 14096 "HINFO IN 1947180028123946014.2018825497004255907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003846824s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
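The 'Still waiting on: "kubernetes"' lines come from CoreDNS's ready plugin, which keeps its readiness endpoint failing until every plugin (here, the kubernetes plugin blocked by the 10.96.0.1:443 timeouts) has synced. A probe sketch against that endpoint (assumes the default ready port :8181 and that the probe runs somewhere the container's localhost or pod IP is reachable):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The "ready" plugin serves HTTP 200 on :8181/ready only once every
	// enabled plugin reports ready; kubelet's readinessProbe hits the
	// same URL.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8181/ready")
	if err != nil {
		fmt.Println("not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ready status:", resp.StatusCode) // 200 once synced
}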
	
	
	==> describe nodes <==
	Name:               no-preload-661954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-661954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=no-preload-661954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_52_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-661954
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:54:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:52:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:54:22 +0000   Thu, 02 Oct 2025 21:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-661954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c032b084c984865bf1543fa4546a69b
	  System UUID:                a884495e-b86e-4c01-a759-33d7d494f01d
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-ddsr2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-661954                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-flmgm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-no-preload-661954              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-no-preload-661954     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-5jstv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-no-preload-661954              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fb9gc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mmbrz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 117s               kube-proxy       
	  Normal   Starting                 57s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m5s               kubelet          Node no-preload-661954 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m5s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m5s               kubelet          Node no-preload-661954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s               kubelet          Node no-preload-661954 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m5s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m1s               node-controller  Node no-preload-661954 event: Registered Node no-preload-661954 in Controller
	  Normal   NodeReady                106s               kubelet          Node no-preload-661954 status is now: NodeReady
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x9 over 69s)  kubelet          Node no-preload-661954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node no-preload-661954 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x7 over 69s)  kubelet          Node no-preload-661954 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node no-preload-661954 event: Registered Node no-preload-661954 in Controller
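The conditions table above is what the NodePressure verification earlier in the log reads: Ready must be True while MemoryPressure, DiskPressure and PIDPressure stay False. A client-go sketch that fetches the same conditions and capacity (a sketch only; node name copied from the describe output above, kubeconfig assumed at the default path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"no-preload-661954", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Healthy: Ready is True, the three pressure conditions are False.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %s  (%s)\n", c.Type, c.Status, c.Reason)
	}
	// Capacity mirrors the "Capacity:" block in the describe output.
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Println("cpu capacity:", cpu.String())
}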
	
	
	==> dmesg <==
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270] <==
	{"level":"warn","ts":"2025-10-02T21:53:49.146566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.170244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.211489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.257881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.285531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.312754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.344099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.373238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.434458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.542820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.544659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.564582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.604901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.638463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.684056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.702243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.728483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.762640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.805396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.882418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.928339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.959326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:49.990268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:50.032336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:50.196252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:54:52 up  6:37,  0 user,  load average: 4.29, 3.15, 2.11
	Linux no-preload-661954 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9e3cec57132b725065e74b658befc5de805ca717fa3dc565174c378bb7fcc9c5] <==
	I1002 21:53:53.510629       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:53:53.511077       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:53:53.511252       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:53:53.511289       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:53:53.511300       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:53:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:53:53.728423       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:53:53.728448       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:53:53.728459       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:53:53.728573       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:54:23.714148       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:54:23.714411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:54:23.714634       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:54:23.728004       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:54:25.229657       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:54:25.229752       1 metrics.go:72] Registering metrics
	I1002 21:54:25.229847       1 controller.go:711] "Syncing nftables rules"
	I1002 21:54:33.713068       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:54:33.713142       1 main.go:301] handling current node
	I1002 21:54:43.717388       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:54:43.717437       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98] <==
	I1002 21:53:52.276272       1 policy_source.go:240] refreshing policies
	I1002 21:53:52.290460       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:53:52.290483       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:53:52.303578       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:53:52.305058       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:53:52.305121       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:53:52.307244       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:53:52.319991       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:53:52.332030       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:53:52.332089       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:53:52.358106       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:53:52.358141       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:53:52.368139       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 21:53:52.414271       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:53:52.785857       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:53:52.898435       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:53:54.008251       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:53:54.158228       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:53:54.253283       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:53:54.282538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:53:54.581841       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.26.45"}
	I1002 21:53:54.606895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.108.131"}
	I1002 21:53:56.220589       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:53:56.661970       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:53:56.807242       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2] <==
	I1002 21:53:56.250872       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 21:53:56.251451       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:53:56.251756       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:53:56.252000       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:53:56.252033       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:53:56.255833       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:53:56.266671       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:53:56.268025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:53:56.273292       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:53:56.275732       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:53:56.282261       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:53:56.282443       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:53:56.282493       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:53:56.282522       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:53:56.282549       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:53:56.291574       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:53:56.296944       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:53:56.300468       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:53:56.300889       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:53:56.301004       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-661954"
	I1002 21:53:56.301080       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:53:56.300565       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:53:56.312438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:53:56.312505       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:53:56.312536       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a6ab31d1759e69dc797d55b97650619bcf6b2ffed03ceade3ad78af7a9ef9788] <==
	I1002 21:53:54.574541       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:53:54.679484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:53:54.780072       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:53:54.780107       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:53:54.780192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:53:54.813063       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:53:54.813133       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:53:54.824523       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:53:54.826013       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:53:54.826061       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:53:54.831665       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:53:54.831689       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:53:54.832016       1 config.go:200] "Starting service config controller"
	I1002 21:53:54.832032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:53:54.832337       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:53:54.832351       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:53:54.835447       1 config.go:309] "Starting node config controller"
	I1002 21:53:54.835972       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:53:54.836336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:53:54.933822       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 21:53:54.937257       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:53:54.937331       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08] <==
	I1002 21:53:49.306993       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:53:54.286646       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:53:54.286748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:53:54.312850       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:53:54.316723       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:53:54.316669       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:53:54.317557       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:53:54.316697       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:53:54.317934       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:53:54.318936       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:53:54.329191       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:53:54.417711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:53:54.417795       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:53:54.418558       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006298     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/681a294b-e922-4417-b18c-432c106b166b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fb9gc\" (UID: \"681a294b-e922-4417-b18c-432c106b166b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006366     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqdz\" (UniqueName: \"kubernetes.io/projected/5828d24d-1b7f-4b37-8eda-0cb1ec554c80-kube-api-access-kjqdz\") pod \"kubernetes-dashboard-855c9754f9-mmbrz\" (UID: \"5828d24d-1b7f-4b37-8eda-0cb1ec554c80\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmbrz"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006390     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8m7w\" (UniqueName: \"kubernetes.io/projected/681a294b-e922-4417-b18c-432c106b166b-kube-api-access-g8m7w\") pod \"dashboard-metrics-scraper-6ffb444bf9-fb9gc\" (UID: \"681a294b-e922-4417-b18c-432c106b166b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: I1002 21:53:57.006414     768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5828d24d-1b7f-4b37-8eda-0cb1ec554c80-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mmbrz\" (UID: \"5828d24d-1b7f-4b37-8eda-0cb1ec554c80\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmbrz"
	Oct 02 21:53:57 no-preload-661954 kubelet[768]: W1002 21:53:57.477839     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f3d77867568460118a6e46c34d02d8731a8c1d1b9fefbc0ed3de33719ef38135/crio-d5b420ffa7191ecdae9148603a68d94088d914d503339c59451a706b233a2b96 WatchSource:0}: Error finding container d5b420ffa7191ecdae9148603a68d94088d914d503339c59451a706b233a2b96: Status 404 returned error can't find the container with id d5b420ffa7191ecdae9148603a68d94088d914d503339c59451a706b233a2b96
	Oct 02 21:54:03 no-preload-661954 kubelet[768]: I1002 21:54:03.020582     768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:54:03 no-preload-661954 kubelet[768]: I1002 21:54:03.500642     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmbrz" podStartSLOduration=2.158877145 podStartE2EDuration="7.500622314s" podCreationTimestamp="2025-10-02 21:53:56 +0000 UTC" firstStartedPulling="2025-10-02 21:53:57.490232127 +0000 UTC m=+13.960224190" lastFinishedPulling="2025-10-02 21:54:02.831977206 +0000 UTC m=+19.301969359" observedRunningTime="2025-10-02 21:54:03.140416466 +0000 UTC m=+19.610408537" watchObservedRunningTime="2025-10-02 21:54:03.500622314 +0000 UTC m=+19.970614385"
	Oct 02 21:54:08 no-preload-661954 kubelet[768]: I1002 21:54:08.137727     768 scope.go:117] "RemoveContainer" containerID="58151bc543a9802af4bc7fc73cc143b6f28db9beb577f14c1abfc9b46ae10186"
	Oct 02 21:54:09 no-preload-661954 kubelet[768]: I1002 21:54:09.142897     768 scope.go:117] "RemoveContainer" containerID="58151bc543a9802af4bc7fc73cc143b6f28db9beb577f14c1abfc9b46ae10186"
	Oct 02 21:54:09 no-preload-661954 kubelet[768]: I1002 21:54:09.143511     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:09 no-preload-661954 kubelet[768]: E1002 21:54:09.143702     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:10 no-preload-661954 kubelet[768]: I1002 21:54:10.147058     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:10 no-preload-661954 kubelet[768]: E1002 21:54:10.147217     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:17 no-preload-661954 kubelet[768]: I1002 21:54:17.425685     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:17 no-preload-661954 kubelet[768]: E1002 21:54:17.425890     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:25 no-preload-661954 kubelet[768]: I1002 21:54:25.184624     768 scope.go:117] "RemoveContainer" containerID="4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495"
	Oct 02 21:54:28 no-preload-661954 kubelet[768]: I1002 21:54:28.869945     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:29 no-preload-661954 kubelet[768]: I1002 21:54:29.198278     768 scope.go:117] "RemoveContainer" containerID="2740e36840bbef980eaa4f05eeee742305e7afe73edf19822adf8cb06d0bed85"
	Oct 02 21:54:30 no-preload-661954 kubelet[768]: I1002 21:54:30.201862     768 scope.go:117] "RemoveContainer" containerID="859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	Oct 02 21:54:30 no-preload-661954 kubelet[768]: E1002 21:54:30.202025     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:37 no-preload-661954 kubelet[768]: I1002 21:54:37.426088     768 scope.go:117] "RemoveContainer" containerID="859aca45b87df60af8c0060e3cca60e3bd6f08c498629d13156197b0ebed33c1"
	Oct 02 21:54:37 no-preload-661954 kubelet[768]: E1002 21:54:37.426285     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb9gc_kubernetes-dashboard(681a294b-e922-4417-b18c-432c106b166b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb9gc" podUID="681a294b-e922-4417-b18c-432c106b166b"
	Oct 02 21:54:46 no-preload-661954 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:54:46 no-preload-661954 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:54:46 no-preload-661954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7e7d0b2884e0b1793d47f40952ea30e017bf6f19ac40af2b25670184e8f23167] <==
	2025/10/02 21:54:02 Starting overwatch
	2025/10/02 21:54:02 Using namespace: kubernetes-dashboard
	2025/10/02 21:54:02 Using in-cluster config to connect to apiserver
	2025/10/02 21:54:02 Using secret token for csrf signing
	2025/10/02 21:54:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:54:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:54:02 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 21:54:02 Generating JWE encryption key
	2025/10/02 21:54:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:54:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:54:03 Initializing JWE encryption key from synchronized object
	2025/10/02 21:54:03 Creating in-cluster Sidecar client
	2025/10/02 21:54:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:54:03 Serving insecurely on HTTP port: 9090
	2025/10/02 21:54:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [032a3b41ed0c467e55383a336d6fd7f6f244fd085545de3a0e761d76b74d86f8] <==
	I1002 21:54:25.266024       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:54:25.267919       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:54:25.272281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:28.727059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:32.987551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:36.586163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:39.639389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.661725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.669171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:54:42.669335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:54:42.669519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-661954_38ec46aa-f74b-4899-bb1b-a8dcf6f30298!
	I1002 21:54:42.670230       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"93c0aec5-6a68-4ee7-97c4-954139f85db0", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-661954_38ec46aa-f74b-4899-bb1b-a8dcf6f30298 became leader
	W1002 21:54:42.672213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.679195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:54:42.770579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-661954_38ec46aa-f74b-4899-bb1b-a8dcf6f30298!
	W1002 21:54:44.682024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:44.689134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:46.692684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:46.701225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:48.705704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:48.714309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:50.717804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:50.730081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:52.733795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:52.744643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4e0dc1463793285417d2b9579a95d8747376988c14fdab1646a7485e46504495] <==
	I1002 21:53:54.194016       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:54:24.215739       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
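
Note on the storage-provisioner logs above: the first instance (4e0dc14...) died because it could not reach https://10.96.0.1:443, the ClusterIP of the default "kubernetes" Service, which typically means service routing was not yet programmed on the node after the restart; the replacement instance (032a3b4...) then acquired the kube-system/k8s.io-minikube-hostpath lease normally. The repeated "v1 Endpoints is deprecated" warnings come from that lease being stored as a v1 Endpoints object. A minimal by-hand check of the Service and its modern EndpointSlice backing, reusing the profile name from this test (assumes a working kubeconfig context):

	kubectl --context no-preload-661954 get svc kubernetes -n default
	kubectl --context no-preload-661954 get endpointslices -n default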
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-661954 -n no-preload-661954
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-661954 -n no-preload-661954: exit status 2 (530.373003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-661954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.24s)
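
The kubelet log in the dump above also shows the standard CrashLoopBackOff progression for dashboard-metrics-scraper: the restart delay doubles from 10s to 20s per failed start (kubelet keeps doubling the back-off, capped at five minutes). A sketch of inspecting the current back-off and last container exit by hand, with the pod name taken from the log:

	kubectl --context no-preload-661954 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-fb9gc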

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (364.759417ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:54:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
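Context for the error above: MK_ADDON_ENABLE_PAUSED means minikube refused to enable the addon because its paused-state probe failed, and that probe is the quoted `sudo runc list -f json` run on the node. The `open /run/runc: no such file or directory` message suggests runc's state directory did not exist at that moment (plausible right after the container runtime restarted), so the probe itself errored rather than reporting any paused containers. A sketch of re-running the same probe manually, with the command taken verbatim from the error and the profile name from this test:

	out/minikube-linux-arm64 ssh -p embed-certs-132977 -- sudo runc list -f json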
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-132977 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-132977 describe deploy/metrics-server -n kube-system: exit status 1 (119.235533ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-132977 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
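What the assertion at start_stop_delete_test.go:219 checks: the --registries value is prefixed to the --images value, so the metrics-server Deployment is expected to reference the image fake.domain/registry.k8s.io/echoserver:1.4. Since the enable call itself exited with status 11, the Deployment was never created, hence the NotFound error and the empty deployment info. A minimal sketch of reading the rendered image once the Deployment exists (the jsonpath expression is an assumption about the pod spec layout, not harness code):

	kubectl --context embed-certs-132977 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'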
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-132977
helpers_test.go:243: (dbg) docker inspect embed-certs-132977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7",
	        "Created": "2025-10-02T21:53:21.268918022Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1190551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:53:21.334683583Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7-json.log",
	        "Name": "/embed-certs-132977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-132977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7",
	                "LowerDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132977",
	                "Source": "/var/lib/docker/volumes/embed-certs-132977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132977",
	                "name.minikube.sigs.k8s.io": "embed-certs-132977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "710d185eb068e1965708b9b6a499d2d7be111a5e6088f0ff310ebcdc3d20cf90",
	            "SandboxKey": "/var/run/docker/netns/710d185eb068",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34200"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34198"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34199"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:7c:54:51:de:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09517ef0cb9cfbc2b4218dc316c6d2b554ca0576a9445b01545284a1bf270966",
	                    "EndpointID": "9c581f9e55f03b64c65f1baaee171adc026aba7ab2baf27e21cb685c8ddcde90",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132977",
	                        "3425438903cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
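The inspect dump above is the full container record; the fields of interest here (run state, published host ports, node IP) can be pulled directly with docker inspect's Go-template flag instead of reading the whole JSON. A minimal sketch against this container:

	docker inspect -f '{{.State.Status}}' embed-certs-132977
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-132977
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-132977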
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-132977 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-132977 logs -n 25: (1.632406782s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-916563                                                                                                                                                                                                                   │ force-systemd-env-916563 │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:49 UTC │
	│ start   │ -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:49 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ cert-options-769461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ ssh     │ -p cert-options-769461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ delete  │ -p cert-options-769461                                                                                                                                                                                                                        │ cert-options-769461      │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:50 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:50 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-714101 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │                     │
	│ stop    │ -p old-k8s-version-714101 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864   │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:53 UTC │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101   │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954        │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977       │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:53:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:53:34.901160 1192467 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:53:34.901344 1192467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:34.901354 1192467 out.go:374] Setting ErrFile to fd 2...
	I1002 21:53:34.901359 1192467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:53:34.901603 1192467 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:53:34.901971 1192467 out.go:368] Setting JSON to false
	I1002 21:53:34.902943 1192467 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23752,"bootTime":1759418263,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:53:34.903008 1192467 start.go:140] virtualization:  
	I1002 21:53:34.906123 1192467 out.go:179] * [no-preload-661954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:53:34.909913 1192467 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:53:34.909979 1192467 notify.go:221] Checking for updates...
	I1002 21:53:34.915971 1192467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:53:34.918955 1192467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:34.921858 1192467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:53:34.925522 1192467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:53:34.928444 1192467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:53:34.931934 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:34.932583 1192467 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:53:34.971588 1192467 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:53:34.971693 1192467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:35.062471 1192467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:53:35.050304835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:35.062582 1192467 docker.go:319] overlay module found
	I1002 21:53:35.065722 1192467 out.go:179] * Using the docker driver based on existing profile
	I1002 21:53:30.791545 1189833 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:53:31.561972 1189833 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:53:31.562140 1189833 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-132977 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:53:32.528147 1189833 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:53:32.528501 1189833 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-132977 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:53:33.148400 1189833 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:53:33.396421 1189833 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:53:33.791661 1189833 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:53:33.792035 1189833 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:53:34.284468 1189833 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:53:35.068509 1192467 start.go:306] selected driver: docker
	I1002 21:53:35.068525 1192467 start.go:936] validating driver "docker" against &{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:35.068625 1192467 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:53:35.069310 1192467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:53:35.175126 1192467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:53:35.156629246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:53:35.175468 1192467 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:35.175494 1192467 cni.go:84] Creating CNI manager for ""
	I1002 21:53:35.175552 1192467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:35.175584 1192467 start.go:350] cluster config:
	{Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:35.178968 1192467 out.go:179] * Starting "no-preload-661954" primary control-plane node in "no-preload-661954" cluster
	I1002 21:53:35.181760 1192467 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:53:35.184818 1192467 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:53:35.187591 1192467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:35.187755 1192467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
	I1002 21:53:35.188104 1192467 cache.go:107] acquiring lock: {Name:mk77546a797d48dfa87e4f15444ebfe2ae46de0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188183 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 21:53:35.188191 1192467 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.019µs
	I1002 21:53:35.188203 1192467 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 21:53:35.188217 1192467 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:53:35.188439 1192467 cache.go:107] acquiring lock: {Name:mkb30203224ed1c1a4b88d93d3aeb9a29d46fb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188507 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 21:53:35.188515 1192467 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 80.859µs
	I1002 21:53:35.188521 1192467 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 21:53:35.188533 1192467 cache.go:107] acquiring lock: {Name:mk2aab2e3052911889ff3d13b07414606ffa2c8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188567 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 21:53:35.188572 1192467 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 41.386µs
	I1002 21:53:35.188578 1192467 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 21:53:35.188587 1192467 cache.go:107] acquiring lock: {Name:mkb1bbde6510d7fb66d3923ec81dcf1545e1aeca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188613 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 21:53:35.188618 1192467 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.73µs
	I1002 21:53:35.188624 1192467 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 21:53:35.188633 1192467 cache.go:107] acquiring lock: {Name:mk783e98a1246826a6f16b0bd25f720d93184154 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188658 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 21:53:35.188663 1192467 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.76µs
	I1002 21:53:35.188676 1192467 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 21:53:35.188687 1192467 cache.go:107] acquiring lock: {Name:mk232b04a28dc0f5922a8e36bb60d83a371a69dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188713 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 21:53:35.188717 1192467 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.606µs
	I1002 21:53:35.188723 1192467 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 21:53:35.188732 1192467 cache.go:107] acquiring lock: {Name:mk17c8111e11ff4babf675464dda89dffef8dccd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188757 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 21:53:35.188763 1192467 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.401µs
	I1002 21:53:35.188879 1192467 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 21:53:35.188898 1192467 cache.go:107] acquiring lock: {Name:mkb9b4c6e229a9543f9236d679c4b53878bc9ce5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.188953 1192467 cache.go:115] /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 21:53:35.188961 1192467 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 65.787µs
	I1002 21:53:35.188967 1192467 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 21:53:35.188974 1192467 cache.go:87] Successfully saved all images to host disk.
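The cache lines above follow one protocol per image: take a file lock, test whether the tarball already exists under .minikube/cache/images/arm64/, and save only on a miss. A minimal bash sketch of the same check-then-save pattern (illustrative only; minikube does this in Go inside cache.go, and docker pull/save stand in here for its internal image handling):

	CACHE=.minikube/cache/images/arm64
	IMG=registry.k8s.io/kube-proxy:v1.34.1
	TAR="$CACHE/${IMG/:/_}"   # -> .minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	if [ -f "$TAR" ]; then
	  echo "cache hit, skipping save: $TAR"
	else
	  mkdir -p "$(dirname "$TAR")"
	  docker pull "$IMG" && docker save "$IMG" -o "$TAR"
	fi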
	I1002 21:53:35.209172 1192467 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:53:35.209192 1192467 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:53:35.209203 1192467 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:53:35.209225 1192467 start.go:361] acquireMachinesLock for no-preload-661954: {Name:mk6a385b42202eaf12d2e98c4a7f7a9c153c60e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:53:35.209273 1192467 start.go:365] duration metric: took 32.262µs to acquireMachinesLock for "no-preload-661954"
	I1002 21:53:35.209292 1192467 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:53:35.209297 1192467 fix.go:55] fixHost starting: 
	I1002 21:53:35.209553 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:35.231660 1192467 fix.go:113] recreateIfNeeded on no-preload-661954: state=Stopped err=<nil>
	W1002 21:53:35.231690 1192467 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:53:35.146380 1189833 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:53:35.272785 1189833 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:53:36.887132 1189833 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:53:38.110579 1189833 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:53:38.111916 1189833 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:53:38.114470 1189833 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:53:35.234941 1192467 out.go:252] * Restarting existing docker container for "no-preload-661954" ...
	I1002 21:53:35.235048 1192467 cli_runner.go:164] Run: docker start no-preload-661954
	I1002 21:53:35.619228 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:35.647925 1192467 kic.go:430] container "no-preload-661954" state is running.
	I1002 21:53:35.648332 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:35.670854 1192467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/config.json ...
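The restart path reduces to two docker CLI calls, both of which also appear verbatim in the log: start the stopped container, then inspect it for its state and for the host port mapped to the container's sshd (used for all of the SSH provisioning below):

	docker start no-preload-661954
	docker container inspect -f '{{.State.Status}}' no-preload-661954   # expect "running"
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-661954   # e.g. 34201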
	I1002 21:53:35.671096 1192467 machine.go:93] provisionDockerMachine start ...
	I1002 21:53:35.671161 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:35.703665 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:35.703994 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:35.704006 1192467 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:53:35.704610 1192467 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60724->127.0.0.1:34201: read: connection reset by peer
	I1002 21:53:38.857630 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:53:38.857699 1192467 ubuntu.go:182] provisioning hostname "no-preload-661954"
	I1002 21:53:38.857794 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:38.878845 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:38.879146 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:38.879163 1192467 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-661954 && echo "no-preload-661954" | sudo tee /etc/hostname
	I1002 21:53:39.021606 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-661954
	
	I1002 21:53:39.021702 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.040144 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:39.040465 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:39.040489 1192467 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-661954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-661954/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-661954' | sudo tee -a /etc/hosts; 
				fi
			fi
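The script above is idempotent and follows the Debian convention of mapping the machine's hostname to 127.0.1.1. Verifying the result on the node would look like this (hedged; exact output depends on the image):

	hostname                          # -> no-preload-661954
	grep '^127.0.1.1' /etc/hosts      # -> 127.0.1.1 no-preload-661954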
	I1002 21:53:39.174332 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:53:39.174356 1192467 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:53:39.174381 1192467 ubuntu.go:190] setting up certificates
	I1002 21:53:39.174390 1192467 provision.go:84] configureAuth start
	I1002 21:53:39.174462 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:39.199440 1192467 provision.go:143] copyHostCerts
	I1002 21:53:39.199504 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:53:39.199513 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:53:39.199565 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:53:39.199656 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:53:39.199661 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:53:39.199687 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:53:39.199745 1192467 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:53:39.199749 1192467 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:53:39.199783 1192467 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:53:39.199839 1192467 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.no-preload-661954 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-661954]
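The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log line above. minikube does this in Go; an openssl equivalent, for illustration only (the 365-day validity is arbitrary here, and the file names follow the log):

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-661954" -out server.csr
	printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-661954\n' > san.cnf
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -extfile san.cnf -out server.pem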
	I1002 21:53:39.732249 1192467 provision.go:177] copyRemoteCerts
	I1002 21:53:39.732321 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:53:39.732369 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.750662 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:39.860304 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:53:39.884742 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:53:38.118027 1189833 out.go:252]   - Booting up control plane ...
	I1002 21:53:38.118154 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:53:38.118243 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:53:38.118329 1189833 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:53:38.137079 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:53:38.137300 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:53:38.146003 1189833 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:53:38.146568 1189833 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:53:38.146815 1189833 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:53:38.276397 1189833 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:53:38.276520 1189833 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:53:38.790399 1189833 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 510.980423ms
	I1002 21:53:38.790928 1189833 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:53:38.791254 1189833 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:53:38.791540 1189833 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:53:38.792552 1189833 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:53:39.918483 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:53:39.949909 1192467 provision.go:87] duration metric: took 775.494692ms to configureAuth
	I1002 21:53:39.949940 1192467 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:53:39.950130 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:39.950234 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:39.982165 1192467 main.go:141] libmachine: Using SSH client type: native
	I1002 21:53:39.982524 1192467 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I1002 21:53:39.982550 1192467 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:53:40.431478 1192467 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:53:40.431547 1192467 machine.go:96] duration metric: took 4.760440429s to provisionDockerMachine
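The CRIO_MINIKUBE_OPTIONS write a few lines above only takes effect because the crio unit sources /etc/sysconfig/crio.minikube; a drop-in of roughly this shape would wire that up (an assumption for illustration — the actual unit inside the kicbase image is not shown in this log):

	# hypothetical: /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS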
	I1002 21:53:40.431574 1192467 start.go:294] postStartSetup for "no-preload-661954" (driver="docker")
	I1002 21:53:40.431603 1192467 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:53:40.431723 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:53:40.431800 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.460589 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.579352 1192467 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:53:40.582836 1192467 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:53:40.582872 1192467 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:53:40.582883 1192467 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:53:40.582946 1192467 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:53:40.583041 1192467 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:53:40.583155 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:53:40.591092 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:53:40.616605 1192467 start.go:297] duration metric: took 184.998596ms for postStartSetup
	I1002 21:53:40.616696 1192467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:53:40.616844 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.647561 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.750782 1192467 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:53:40.756954 1192467 fix.go:57] duration metric: took 5.547649748s for fixHost
	I1002 21:53:40.756981 1192467 start.go:84] releasing machines lock for "no-preload-661954", held for 5.547699282s
	I1002 21:53:40.757047 1192467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-661954
	I1002 21:53:40.784891 1192467 ssh_runner.go:195] Run: cat /version.json
	I1002 21:53:40.784948 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.785190 1192467 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:53:40.785240 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:40.822484 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.822940 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:40.937967 1192467 ssh_runner.go:195] Run: systemctl --version
	I1002 21:53:41.061857 1192467 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:53:41.145969 1192467 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:53:41.151940 1192467 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:53:41.152019 1192467 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:53:41.165141 1192467 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:53:41.165186 1192467 start.go:496] detecting cgroup driver to use...
	I1002 21:53:41.165217 1192467 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:53:41.165275 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:53:41.188391 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:53:41.213237 1192467 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:53:41.213309 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:53:41.238346 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:53:41.265240 1192467 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:53:41.496554 1192467 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:53:41.702649 1192467 docker.go:234] disabling docker service ...
	I1002 21:53:41.702738 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:53:41.723182 1192467 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:53:41.753668 1192467 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:53:41.948226 1192467 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:53:42.192815 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:53:42.223559 1192467 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:53:42.251561 1192467 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:53:42.251654 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.267876 1192467 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:53:42.267981 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.285908 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.301305 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.318315 1192467 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:53:42.332682 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.345212 1192467 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.361714 1192467 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:53:42.378749 1192467 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:53:42.392058 1192467 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:53:42.404240 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:42.610903 1192467 ssh_runner.go:195] Run: sudo systemctl restart crio
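After the sed pipeline above runs, the edited keys in /etc/crio/crio.conf.d/02-crio.conf should come out roughly as follows (a sketch: in stock CRI-O configs pause_image lives under [crio.image] and the rest under [crio.runtime], but the seds only match the keys themselves):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]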
	I1002 21:53:42.815298 1192467 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:53:42.815393 1192467 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:53:42.821814 1192467 start.go:564] Will wait 60s for crictl version
	I1002 21:53:42.821896 1192467 ssh_runner.go:195] Run: which crictl
	I1002 21:53:42.825340 1192467 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:53:42.877728 1192467 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:53:42.877820 1192467 ssh_runner.go:195] Run: crio --version
	I1002 21:53:42.940328 1192467 ssh_runner.go:195] Run: crio --version
	I1002 21:53:42.988804 1192467 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:53:42.991635 1192467 cli_runner.go:164] Run: docker network inspect no-preload-661954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:53:43.013876 1192467 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:53:43.017684 1192467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:53:43.040354 1192467 kubeadm.go:883] updating cluster {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:53:43.040474 1192467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:53:43.040519 1192467 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:53:43.097583 1192467 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:53:43.097609 1192467 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:53:43.097617 1192467 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 21:53:43.097711 1192467 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-661954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
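In the kubelet drop-in above, the empty ExecStart= line is deliberate: in a systemd override it clears the ExecStart inherited from the base unit before the next line sets the replacement, since a non-oneshot service with two accumulated ExecStart entries would fail to load. The merged result can be inspected on the node with:

	systemctl cat kubelet   # base unit plus 10-kubeadm.conf drop-in, in merge order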
	I1002 21:53:43.097796 1192467 ssh_runner.go:195] Run: crio config
	I1002 21:53:43.192119 1192467 cni.go:84] Creating CNI manager for ""
	I1002 21:53:43.192150 1192467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:43.192168 1192467 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:53:43.192204 1192467 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-661954 NodeName:no-preload-661954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:53:43.192338 1192467 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-661954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:53:43.192434 1192467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:53:43.205178 1192467 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:53:43.205246 1192467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:53:43.215550 1192467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:53:43.239441 1192467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:53:43.262457 1192467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
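The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before use; it can be sanity-checked in place with kubeadm itself (hedged: `kubeadm config validate` exists in recent kubeadm releases, and the binary path follows the log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new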
	I1002 21:53:43.284544 1192467 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:53:43.293407 1192467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:53:43.307245 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:43.506121 1192467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:43.523524 1192467 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954 for IP: 192.168.85.2
	I1002 21:53:43.523545 1192467 certs.go:195] generating shared ca certs ...
	I1002 21:53:43.523561 1192467 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:43.523728 1192467 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:53:43.523791 1192467 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:53:43.523803 1192467 certs.go:257] generating profile certs ...
	I1002 21:53:43.523918 1192467 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.key
	I1002 21:53:43.523983 1192467 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key.ffe6e5b4
	I1002 21:53:43.524026 1192467 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key
	I1002 21:53:43.524152 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:53:43.524198 1192467 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:53:43.524211 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:53:43.524234 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:53:43.524263 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:53:43.524302 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:53:43.524359 1192467 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:53:43.525092 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:53:43.586699 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:53:43.621808 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:53:43.686543 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:53:43.735499 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:53:43.762086 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:53:43.794944 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:53:43.868155 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:53:43.923880 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:53:43.951493 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:53:43.988190 1192467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:53:44.019229 1192467 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:53:44.046658 1192467 ssh_runner.go:195] Run: openssl version
	I1002 21:53:44.053428 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:53:44.063274 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.070507 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.070596 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:53:44.113319 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:53:44.122171 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:53:44.131226 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.136447 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.136521 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:53:44.182170 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:53:44.195482 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:53:44.207002 1192467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.211690 1192467 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.211780 1192467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:53:44.256193 1192467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
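The hash/symlink pairs above implement OpenSSL's hashed-directory CA lookup: a verifier finds a CA at /etc/ssl/certs/<subject-hash>.0, where the hash is exactly what `openssl x509 -hash` prints. Condensed:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"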
	I1002 21:53:44.264627 1192467 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:53:44.268830 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:53:44.317092 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:53:44.387292 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:53:44.526916 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:53:44.730899 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:53:44.894226 1192467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
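The -checkend 86400 probes above rely on openssl's exit status: 0 when the certificate is still valid 86400 seconds (24 h) from now, non-zero when it will have expired by then, which is what lets the caller decide whether a cert needs regenerating before the restart. As a standalone check:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "valid for at least 24h"
	else
	  echo "expires within 24h - regenerate"
	fi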
	I1002 21:53:45.002059 1192467 kubeadm.go:400] StartCluster: {Name:no-preload-661954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-661954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:53:45.002171 1192467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:53:45.002259 1192467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:53:45.087963 1192467 cri.go:89] found id: "3cf04b502d36e04bbf37deadf1e0009a85fad1306e14b2a672afa8633ea20e98"
	I1002 21:53:45.088014 1192467 cri.go:89] found id: "c31f86dc038a7a8cda44b794e533fc5838b3adc20907a05388b6afdffa5ec270"
	I1002 21:53:45.088021 1192467 cri.go:89] found id: "88076a11fa43f4493d4db1e7eaa62ad0cda2f161d8d48715aea30b33259edee2"
	I1002 21:53:45.088025 1192467 cri.go:89] found id: "5cd95915db6182c9baece2f44a4bb2de93f9d4189ac588470b8045a3fc361f08"
	I1002 21:53:45.088037 1192467 cri.go:89] found id: ""
	I1002 21:53:45.088116 1192467 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:53:45.106135 1192467 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:53:45Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:53:45.106285 1192467 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:53:45.127163 1192467 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:53:45.127205 1192467 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:53:45.127315 1192467 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:53:45.145179 1192467 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:53:45.145726 1192467 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-661954" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:45.145883 1192467 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-661954" cluster setting kubeconfig missing "no-preload-661954" context setting]
	I1002 21:53:45.146372 1192467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.148353 1192467 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:53:45.171240 1192467 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 21:53:45.171280 1192467 kubeadm.go:601] duration metric: took 44.0584ms to restartPrimaryControlPlane
	I1002 21:53:45.171301 1192467 kubeadm.go:402] duration metric: took 169.276623ms to StartCluster
	I1002 21:53:45.171317 1192467 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.171405 1192467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:45.172141 1192467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:45.172397 1192467 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:45.172731 1192467 config.go:182] Loaded profile config "no-preload-661954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:45.172795 1192467 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:53:45.172941 1192467 addons.go:69] Setting storage-provisioner=true in profile "no-preload-661954"
	I1002 21:53:45.172962 1192467 addons.go:238] Setting addon storage-provisioner=true in "no-preload-661954"
	W1002 21:53:45.172971 1192467 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:53:45.172986 1192467 addons.go:69] Setting dashboard=true in profile "no-preload-661954"
	I1002 21:53:45.173070 1192467 addons.go:238] Setting addon dashboard=true in "no-preload-661954"
	W1002 21:53:45.173108 1192467 addons.go:247] addon dashboard should already be in state true
	I1002 21:53:45.173158 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.172993 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.173802 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.173831 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.173011 1192467 addons.go:69] Setting default-storageclass=true in profile "no-preload-661954"
	I1002 21:53:45.174417 1192467 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-661954"
	I1002 21:53:45.174758 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.178130 1192467 out.go:179] * Verifying Kubernetes components...
	I1002 21:53:45.184246 1192467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:45.223050 1192467 addons.go:238] Setting addon default-storageclass=true in "no-preload-661954"
	W1002 21:53:45.223076 1192467 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:53:45.223104 1192467 host.go:66] Checking if "no-preload-661954" exists ...
	I1002 21:53:45.223578 1192467 cli_runner.go:164] Run: docker container inspect no-preload-661954 --format={{.State.Status}}
	I1002 21:53:45.253512 1192467 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:53:45.256789 1192467 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:45.256819 1192467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:53:45.256917 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.265209 1192467 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:53:45.272097 1192467 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:53:47.251888 1189833 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.458509189s
	I1002 21:53:48.346662 1189833 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 9.553935935s
	I1002 21:53:49.793477 1189833 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.001693502s
	I1002 21:53:49.824620 1189833 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:53:49.862218 1189833 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:53:49.880328 1189833 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:53:49.880555 1189833 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-132977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:53:49.899333 1189833 kubeadm.go:318] [bootstrap-token] Using token: 21plum.6l6cs3s9kwcorv4m
	I1002 21:53:45.275305 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:53:45.275342 1192467 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:53:45.275420 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.296741 1192467 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:45.296773 1192467 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:53:45.296850 1192467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-661954
	I1002 21:53:45.321320 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.368395 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.374254 1192467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/no-preload-661954/id_rsa Username:docker}
	I1002 21:53:45.730839 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:53:45.730866 1192467 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:53:45.784541 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:53:45.784569 1192467 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:53:45.855767 1192467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:45.865965 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:45.869532 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:53:45.869557 1192467 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:53:45.885049 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:45.961157 1192467 node_ready.go:35] waiting up to 6m0s for node "no-preload-661954" to be "Ready" ...
	I1002 21:53:45.963075 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:53:45.963130 1192467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:53:46.130992 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:53:46.131067 1192467 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:53:46.259505 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:53:46.259570 1192467 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:53:46.362568 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:53:46.362647 1192467 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:53:46.404379 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:53:46.404444 1192467 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:53:46.439621 1192467 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:53:46.439701 1192467 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:53:46.463673 1192467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
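All ten dashboard manifests are applied in a single kubectl invocation against the node-local kubeconfig. A typical follow-up to confirm the addon rolled out (hedged: the namespace and deployment name follow the upstream dashboard v2.7.0 manifests that the addon ships):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s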
	I1002 21:53:49.902382 1189833 out.go:252]   - Configuring RBAC rules ...
	I1002 21:53:49.902508 1189833 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:53:49.914154 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:53:49.922679 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:53:49.927109 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:53:49.931429 1189833 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:53:49.936135 1189833 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:53:50.201066 1189833 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:53:50.702788 1189833 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:53:51.203959 1189833 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:53:51.205373 1189833 kubeadm.go:318] 
	I1002 21:53:51.205451 1189833 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:53:51.205465 1189833 kubeadm.go:318] 
	I1002 21:53:51.205549 1189833 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:53:51.205558 1189833 kubeadm.go:318] 
	I1002 21:53:51.205585 1189833 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:53:51.205645 1189833 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:53:51.205701 1189833 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:53:51.205710 1189833 kubeadm.go:318] 
	I1002 21:53:51.205763 1189833 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:53:51.205772 1189833 kubeadm.go:318] 
	I1002 21:53:51.205819 1189833 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:53:51.205826 1189833 kubeadm.go:318] 
	I1002 21:53:51.205878 1189833 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:53:51.205956 1189833 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:53:51.206027 1189833 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:53:51.206052 1189833 kubeadm.go:318] 
	I1002 21:53:51.206137 1189833 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:53:51.206218 1189833 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:53:51.206226 1189833 kubeadm.go:318] 
	I1002 21:53:51.206309 1189833 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 21plum.6l6cs3s9kwcorv4m \
	I1002 21:53:51.206415 1189833 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:53:51.206439 1189833 kubeadm.go:318] 	--control-plane 
	I1002 21:53:51.206448 1189833 kubeadm.go:318] 
	I1002 21:53:51.206532 1189833 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:53:51.206541 1189833 kubeadm.go:318] 
	I1002 21:53:51.206629 1189833 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 21plum.6l6cs3s9kwcorv4m \
	I1002 21:53:51.206735 1189833 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:53:51.215083 1189833 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:53:51.215325 1189833 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:53:51.215466 1189833 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
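All three kubeadm warnings are non-fatal here: the cgroups v1 notice reflects the AWS kernel, the missing "configs" module only disables a kernel-config preflight check, and minikube starts kubelet itself, so the unit not being enabled at boot is expected inside the node. Silencing the last warning by hand would just be the command kubeadm suggests:

	sudo systemctl enable kubelet.service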
	I1002 21:53:51.215488 1189833 cni.go:84] Creating CNI manager for ""
	I1002 21:53:51.215496 1189833 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:53:51.218899 1189833 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:53:52.178341 1192467 node_ready.go:49] node "no-preload-661954" is "Ready"
	I1002 21:53:52.178366 1192467 node_ready.go:38] duration metric: took 6.217135309s for node "no-preload-661954" to be "Ready" ...
	I1002 21:53:52.178381 1192467 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:53:52.178441 1192467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:53:52.585784 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.719776673s)
	I1002 21:53:54.584206 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.699121666s)
	I1002 21:53:54.614118 1192467 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.150352846s)
	I1002 21:53:54.614340 1192467 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.435886189s)
	I1002 21:53:54.614380 1192467 api_server.go:72] duration metric: took 9.441950505s to wait for apiserver process to appear ...
	I1002 21:53:54.614401 1192467 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:53:54.614430 1192467 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:54.617022 1192467 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-661954 addons enable metrics-server
	
	I1002 21:53:54.620054 1192467 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 21:53:54.622900 1192467 addons.go:514] duration metric: took 9.450101363s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 21:53:54.630627 1192467 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:53:54.630697 1192467 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[verbose healthz body identical to the 500 response above]
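The 500 is transient: every check passes except the rbac/bootstrap-roles post-start hook, which turns ok once the bootstrap RBAC objects have been reconciled, and the retry at 21:53:55 below gets a 200. The same verbose breakdown can be requested by hand (a sketch, assuming the same apiserver endpoint and skipping TLS verification):

	curl -k https://192.168.85.2:8443/healthz?verbose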
	I1002 21:53:51.221764 1189833 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:53:51.226028 1189833 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:53:51.226066 1189833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:53:51.254968 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
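Applying the generated kindnet manifest should bring the CNI DaemonSet up on the node. A hedged spot check, assuming kindnet's usual app=kindnet pod label:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system -l app=kindnet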
	I1002 21:53:51.939290 1189833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:53:51.939413 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:51.939489 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-132977 minikube.k8s.io/updated_at=2025_10_02T21_53_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=embed-certs-132977 minikube.k8s.io/primary=true
	I1002 21:53:52.314456 1189833 ops.go:34] apiserver oom_adj: -16
	I1002 21:53:52.314561 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:52.815637 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:53.315233 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:53.814746 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:54.314889 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:54.814670 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:55.315642 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:55.815157 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:56.315270 1189833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:53:56.433779 1189833 kubeadm.go:1113] duration metric: took 4.494413139s to wait for elevateKubeSystemPrivileges
	I1002 21:53:56.433817 1189833 kubeadm.go:402] duration metric: took 27.86764968s to StartCluster
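The burst of "kubectl get sa default" runs above is a poll loop: elevateKubeSystemPrivileges is only considered done once the default service account exists, because workloads cannot be admitted without it. An equivalent shell sketch, using the same binary and kubeconfig as the log:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done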
	I1002 21:53:56.433835 1189833 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:56.433900 1189833 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:53:56.435285 1189833 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:53:56.435511 1189833 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:53:56.435638 1189833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:53:56.435896 1189833 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:53:56.435933 1189833 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:53:56.435991 1189833 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132977"
	I1002 21:53:56.436007 1189833 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-132977"
	I1002 21:53:56.436027 1189833 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:53:56.436540 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.436923 1189833 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132977"
	I1002 21:53:56.436947 1189833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132977"
	I1002 21:53:56.437223 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.439647 1189833 out.go:179] * Verifying Kubernetes components...
	I1002 21:53:56.443344 1189833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:53:56.476650 1189833 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:53:56.478543 1189833 addons.go:238] Setting addon default-storageclass=true in "embed-certs-132977"
	I1002 21:53:56.478584 1189833 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:53:56.479128 1189833 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:53:56.479690 1189833 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:56.479712 1189833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:53:56.479769 1189833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:53:56.524116 1189833 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:56.524137 1189833 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:53:56.524204 1189833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:53:56.524618 1189833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:53:56.565467 1189833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34196 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:53:56.834821 1189833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:53:56.927271 1189833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:53:56.955009 1189833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:53:56.973420 1189833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:53:57.566058 1189833 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
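The sed pipeline a few lines up rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway. Reconstructed from that command, the fragment injected into the Corefile is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}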
	I1002 21:53:57.900359 1189833 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:53:57.915026 1189833 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:53:55.114484 1192467 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 21:53:55.123286 1192467 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 21:53:55.124661 1192467 api_server.go:141] control plane version: v1.34.1
	I1002 21:53:55.124693 1192467 api_server.go:131] duration metric: took 510.273967ms to wait for apiserver health ...
	I1002 21:53:55.124703 1192467 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:53:55.128660 1192467 system_pods.go:59] 8 kube-system pods found
	I1002 21:53:55.128703 1192467 system_pods.go:61] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:55.128760 1192467 system_pods.go:61] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:53:55.128777 1192467 system_pods.go:61] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:55.128794 1192467 system_pods.go:61] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:53:55.128828 1192467 system_pods.go:61] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:53:55.128841 1192467 system_pods.go:61] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:55.128879 1192467 system_pods.go:61] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:53:55.128911 1192467 system_pods.go:61] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:55.128919 1192467 system_pods.go:74] duration metric: took 4.210506ms to wait for pod list to return data ...
	I1002 21:53:55.128954 1192467 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:53:55.132297 1192467 default_sa.go:45] found service account: "default"
	I1002 21:53:55.132328 1192467 default_sa.go:55] duration metric: took 3.360478ms for default service account to be created ...
	I1002 21:53:55.132341 1192467 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:53:55.136969 1192467 system_pods.go:86] 8 kube-system pods found
	I1002 21:53:55.137010 1192467 system_pods.go:89] "coredns-66bc5c9577-ddsr2" [af6a1936-ec5c-4c31-9f22-73cc6f7042c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:53:55.137026 1192467 system_pods.go:89] "etcd-no-preload-661954" [496cecf9-be4d-453d-aee5-11ec78563118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:53:55.137034 1192467 system_pods.go:89] "kindnet-flmgm" [bb04d59b-0f2b-44db-bbd1-53d35a0d1406] Running
	I1002 21:53:55.137042 1192467 system_pods.go:89] "kube-apiserver-no-preload-661954" [73de157f-6464-4c1a-8de1-87a4fff66c68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:53:55.137053 1192467 system_pods.go:89] "kube-controller-manager-no-preload-661954" [eda68d1f-9389-46d7-88c2-3f2017805cf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:53:55.137062 1192467 system_pods.go:89] "kube-proxy-5jstv" [51774e0f-371e-4a31-801f-9ca681eefe74] Running
	I1002 21:53:55.137069 1192467 system_pods.go:89] "kube-scheduler-no-preload-661954" [2d738e6b-41aa-46db-b8a7-3d4a2e34487a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:53:55.137078 1192467 system_pods.go:89] "storage-provisioner" [862f39f9-ad9b-4268-86f6-775e9221224b] Running
	I1002 21:53:55.137087 1192467 system_pods.go:126] duration metric: took 4.740634ms to wait for k8s-apps to be running ...
	I1002 21:53:55.137100 1192467 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:53:55.137170 1192467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:53:55.158776 1192467 system_svc.go:56] duration metric: took 21.666236ms WaitForService to wait for kubelet
	I1002 21:53:55.158878 1192467 kubeadm.go:586] duration metric: took 9.986436313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:53:55.158941 1192467 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:53:55.162488 1192467 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:53:55.162583 1192467 node_conditions.go:123] node cpu capacity is 2
	I1002 21:53:55.162612 1192467 node_conditions.go:105] duration metric: took 3.648453ms to run NodePressure ...
	I1002 21:53:55.162651 1192467 start.go:242] waiting for startup goroutines ...
	I1002 21:53:55.162679 1192467 start.go:247] waiting for cluster config update ...
	I1002 21:53:55.162704 1192467 start.go:256] writing updated cluster config ...
	I1002 21:53:55.163077 1192467 ssh_runner.go:195] Run: rm -f paused
	I1002 21:53:55.167590 1192467 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:53:55.171504 1192467 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:53:57.180373 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:53:59.678019 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	I1002 21:53:57.918188 1189833 addons.go:514] duration metric: took 1.482248421s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:53:58.071227 1189833 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-132977" context rescaled to 1 replicas
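Rescaling the coredns deployment to one replica keeps the DNS footprint minimal on a single-node cluster; the standalone kubectl equivalent of what kapi.go does here would be roughly:

	kubectl -n kube-system scale deployment coredns --replicas=1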
	W1002 21:53:59.905322 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:01.681466 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:04.177105 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:02.403445 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:04.405558 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:06.179313 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:08.678604 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:06.903317 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:09.403229 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:11.176586 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:13.677184 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:11.403384 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:13.903569 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:15.678985 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:18.177426 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:15.904067 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:18.403552 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:20.678796 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:23.177446 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:20.903769 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:22.903999 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:24.904140 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:25.178291 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:27.677592 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:26.904328 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:29.403133 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:30.177272 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	W1002 21:54:32.677201 1192467 pod_ready.go:104] pod "coredns-66bc5c9577-ddsr2" is not "Ready", error: <nil>
	I1002 21:54:33.176891 1192467 pod_ready.go:94] pod "coredns-66bc5c9577-ddsr2" is "Ready"
	I1002 21:54:33.176922 1192467 pod_ready.go:86] duration metric: took 38.005343021s for pod "coredns-66bc5c9577-ddsr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.179511 1192467 pod_ready.go:83] waiting for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.184943 1192467 pod_ready.go:94] pod "etcd-no-preload-661954" is "Ready"
	I1002 21:54:33.184972 1192467 pod_ready.go:86] duration metric: took 5.432776ms for pod "etcd-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.187710 1192467 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.196382 1192467 pod_ready.go:94] pod "kube-apiserver-no-preload-661954" is "Ready"
	I1002 21:54:33.196413 1192467 pod_ready.go:86] duration metric: took 8.671641ms for pod "kube-apiserver-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.198899 1192467 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.375336 1192467 pod_ready.go:94] pod "kube-controller-manager-no-preload-661954" is "Ready"
	I1002 21:54:33.375367 1192467 pod_ready.go:86] duration metric: took 176.436003ms for pod "kube-controller-manager-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.575930 1192467 pod_ready.go:83] waiting for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:33.975363 1192467 pod_ready.go:94] pod "kube-proxy-5jstv" is "Ready"
	I1002 21:54:33.975393 1192467 pod_ready.go:86] duration metric: took 399.437804ms for pod "kube-proxy-5jstv" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.180551 1192467 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.575430 1192467 pod_ready.go:94] pod "kube-scheduler-no-preload-661954" is "Ready"
	I1002 21:54:34.575460 1192467 pod_ready.go:86] duration metric: took 394.885383ms for pod "kube-scheduler-no-preload-661954" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:34.575481 1192467 pod_ready.go:40] duration metric: took 39.407775252s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:34.631486 1192467 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:54:34.634421 1192467 out.go:179] * Done! kubectl is now configured to use "no-preload-661954" cluster and "default" namespace by default
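kubectl 1.33.2 against a v1.34.1 control plane sits inside kubectl's supported one-minor-version skew, so the mismatch is only logged, not treated as an error. A quick post-start sanity check, assuming minikube's usual context name matching the profile:

	kubectl --context no-preload-661954 get nodes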
	W1002 21:54:31.403472 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:33.903536 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:35.903696 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	W1002 21:54:37.903870 1189833 node_ready.go:57] node "embed-certs-132977" has "Ready":"False" status (will retry)
	I1002 21:54:38.404715 1189833 node_ready.go:49] node "embed-certs-132977" is "Ready"
	I1002 21:54:38.404740 1189833 node_ready.go:38] duration metric: took 40.504339879s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:54:38.404753 1189833 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:54:38.404814 1189833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:54:38.433291 1189833 api_server.go:72] duration metric: took 41.997752118s to wait for apiserver process to appear ...
	I1002 21:54:38.433313 1189833 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:54:38.433332 1189833 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:54:38.445543 1189833 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:54:38.446830 1189833 api_server.go:141] control plane version: v1.34.1
	I1002 21:54:38.446852 1189833 api_server.go:131] duration metric: took 13.531475ms to wait for apiserver health ...
	I1002 21:54:38.446860 1189833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:54:38.450727 1189833 system_pods.go:59] 8 kube-system pods found
	I1002 21:54:38.450758 1189833 system_pods.go:61] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.450765 1189833 system_pods.go:61] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.450771 1189833 system_pods.go:61] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.450775 1189833 system_pods.go:61] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.450784 1189833 system_pods.go:61] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.450789 1189833 system_pods.go:61] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.450793 1189833 system_pods.go:61] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.450799 1189833 system_pods.go:61] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.450805 1189833 system_pods.go:74] duration metric: took 3.939416ms to wait for pod list to return data ...
	I1002 21:54:38.450813 1189833 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:54:38.454525 1189833 default_sa.go:45] found service account: "default"
	I1002 21:54:38.454544 1189833 default_sa.go:55] duration metric: took 3.725851ms for default service account to be created ...
	I1002 21:54:38.454554 1189833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:54:38.457911 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:38.457941 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.457949 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.457955 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.457959 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.457964 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.457968 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.457971 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.457977 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.457997 1189833 retry.go:31] will retry after 282.68274ms: missing components: kube-dns
	I1002 21:54:38.745579 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:38.745667 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:38.745700 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:38.745714 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:38.745720 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:38.745725 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:38.745730 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:38.745734 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:38.745740 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:38.745759 1189833 retry.go:31] will retry after 289.646816ms: missing components: kube-dns
	I1002 21:54:39.039529 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:39.039556 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:54:39.039562 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:39.039569 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:39.039573 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:39.039578 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:39.039581 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:39.039585 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:39.039591 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:54:39.039605 1189833 retry.go:31] will retry after 417.217485ms: missing components: kube-dns
	I1002 21:54:39.461452 1189833 system_pods.go:86] 8 kube-system pods found
	I1002 21:54:39.461501 1189833 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running
	I1002 21:54:39.461509 1189833 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running
	I1002 21:54:39.461513 1189833 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:54:39.461518 1189833 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running
	I1002 21:54:39.461541 1189833 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running
	I1002 21:54:39.461554 1189833 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:54:39.461573 1189833 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running
	I1002 21:54:39.461584 1189833 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:54:39.461594 1189833 system_pods.go:126] duration metric: took 1.007033707s to wait for k8s-apps to be running ...
	I1002 21:54:39.461604 1189833 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:54:39.461671 1189833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:54:39.478598 1189833 system_svc.go:56] duration metric: took 16.985989ms WaitForService to wait for kubelet
	I1002 21:54:39.478670 1189833 kubeadm.go:586] duration metric: took 43.043135125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:54:39.478704 1189833 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:54:39.482160 1189833 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:54:39.482195 1189833 node_conditions.go:123] node cpu capacity is 2
	I1002 21:54:39.482210 1189833 node_conditions.go:105] duration metric: took 3.499272ms to run NodePressure ...
	I1002 21:54:39.482223 1189833 start.go:242] waiting for startup goroutines ...
	I1002 21:54:39.482230 1189833 start.go:247] waiting for cluster config update ...
	I1002 21:54:39.482242 1189833 start.go:256] writing updated cluster config ...
	I1002 21:54:39.482538 1189833 ssh_runner.go:195] Run: rm -f paused
	I1002 21:54:39.486611 1189833 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:39.561128 1189833 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.566358 1189833 pod_ready.go:94] pod "coredns-66bc5c9577-rl5vq" is "Ready"
	I1002 21:54:39.566389 1189833 pod_ready.go:86] duration metric: took 5.230919ms for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.568792 1189833 pod_ready.go:83] waiting for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.573727 1189833 pod_ready.go:94] pod "etcd-embed-certs-132977" is "Ready"
	I1002 21:54:39.573755 1189833 pod_ready.go:86] duration metric: took 4.934738ms for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.576177 1189833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.580896 1189833 pod_ready.go:94] pod "kube-apiserver-embed-certs-132977" is "Ready"
	I1002 21:54:39.580922 1189833 pod_ready.go:86] duration metric: took 4.714781ms for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.583217 1189833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:39.892636 1189833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-132977" is "Ready"
	I1002 21:54:39.892665 1189833 pod_ready.go:86] duration metric: took 309.422099ms for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.096570 1189833 pod_ready.go:83] waiting for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.492183 1189833 pod_ready.go:94] pod "kube-proxy-rslfw" is "Ready"
	I1002 21:54:40.492212 1189833 pod_ready.go:86] duration metric: took 395.615555ms for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:40.692648 1189833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:41.092952 1189833 pod_ready.go:94] pod "kube-scheduler-embed-certs-132977" is "Ready"
	I1002 21:54:41.092979 1189833 pod_ready.go:86] duration metric: took 400.302152ms for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:54:41.092991 1189833 pod_ready.go:40] duration metric: took 1.606349041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:54:41.150287 1189833 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:54:41.156763 1189833 out.go:179] * Done! kubectl is now configured to use "embed-certs-132977" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 21:54:38 embed-certs-132977 crio[839]: time="2025-10-02T21:54:38.392784154Z" level=info msg="Created container 02fc1551039411f0a7fd0437fab62fd1f0878708de3d1d0ebbe03a0b7d1d50f3: kube-system/coredns-66bc5c9577-rl5vq/coredns" id=335eba4e-93de-4970-af6b-ff808d5f6eda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:54:38 embed-certs-132977 crio[839]: time="2025-10-02T21:54:38.398395888Z" level=info msg="Starting container: 02fc1551039411f0a7fd0437fab62fd1f0878708de3d1d0ebbe03a0b7d1d50f3" id=d7455bf7-472d-4e8f-91de-80bbcdc6e0cb name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:54:38 embed-certs-132977 crio[839]: time="2025-10-02T21:54:38.420264678Z" level=info msg="Started container" PID=1749 containerID=02fc1551039411f0a7fd0437fab62fd1f0878708de3d1d0ebbe03a0b7d1d50f3 description=kube-system/coredns-66bc5c9577-rl5vq/coredns id=d7455bf7-472d-4e8f-91de-80bbcdc6e0cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a3cdd90d57f33ccb97ccabb8f2689dab5ea63bac9e2f774aee2393816396679
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.68833534Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c2120c02-2130-4f50-8670-b14cfb01b1aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.688496272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.698985997Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e UID:9f6d78c1-117c-4139-bd07-281c745fef52 NetNS:/var/run/netns/004f1c3e-5689-4b75-9043-8c78d50dadba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d608}] Aliases:map[]}"
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.699035736Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.70999891Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e UID:9f6d78c1-117c-4139-bd07-281c745fef52 NetNS:/var/run/netns/004f1c3e-5689-4b75-9043-8c78d50dadba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d608}] Aliases:map[]}"
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.710195311Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.714142185Z" level=info msg="Ran pod sandbox 522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e with infra container: default/busybox/POD" id=c2120c02-2130-4f50-8670-b14cfb01b1aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.715256018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a7d0669-d979-49ef-93fa-97b09136871e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.715383506Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5a7d0669-d979-49ef-93fa-97b09136871e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.715430061Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5a7d0669-d979-49ef-93fa-97b09136871e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.717423485Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c10be3e-a7f2-4a06-a640-bf059c5efafe name=/runtime.v1.ImageService/PullImage
	Oct 02 21:54:41 embed-certs-132977 crio[839]: time="2025-10-02T21:54:41.719116348Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.815068969Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8c10be3e-a7f2-4a06-a640-bf059c5efafe name=/runtime.v1.ImageService/PullImage
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.815767576Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4024d5ca-4bd1-4bf9-b563-03c455d0eacc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.819107796Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea0e0af6-6e7f-44cb-a299-3d20d09fb6eb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.826707161Z" level=info msg="Creating container: default/busybox/busybox" id=c5397571-e321-4a5d-b83b-46b1ca24fa78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.827497352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.832156774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.832626349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.848804803Z" level=info msg="Created container fc875073135de5f601e3468a1b0f4f04d851155526231baee026597b6bce51c7: default/busybox/busybox" id=c5397571-e321-4a5d-b83b-46b1ca24fa78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.850154126Z" level=info msg="Starting container: fc875073135de5f601e3468a1b0f4f04d851155526231baee026597b6bce51c7" id=3434cf92-e57a-4750-bee5-8cec2a40afa2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:54:43 embed-certs-132977 crio[839]: time="2025-10-02T21:54:43.851720796Z" level=info msg="Started container" PID=1810 containerID=fc875073135de5f601e3468a1b0f4f04d851155526231baee026597b6bce51c7 description=default/busybox/busybox id=3434cf92-e57a-4750-bee5-8cec2a40afa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e
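The block above is the tail of the CRI-O journal from inside the node, showing the whole busybox lifecycle: sandbox creation, kindnet network attach, image pull by digest, container create and start. Assuming crio runs as a systemd unit in the node, roughly the same lines come from:

	sudo journalctl -u crio --no-pager -n 50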
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	fc875073135de       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   522bf77cfc5c4       busybox                                      default
	02fc155103941       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   7a3cdd90d57f3       coredns-66bc5c9577-rl5vq                     kube-system
	7e0d8b3766c1c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   7910c5d4f15fe       storage-provisioner                          kube-system
	fefdf8d6ccb74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   6a84e1f734aef       kube-proxy-rslfw                             kube-system
	b964c941c06bd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   e3eacc55a89d5       kindnet-p845j                                kube-system
	52bd30f6083cc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   d4e288445af11       kube-scheduler-embed-certs-132977            kube-system
	d953c91e49d5b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   e2c8a617c63ef       kube-controller-manager-embed-certs-132977   kube-system
	4b4578200a41a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   74aa7a5c94a9c       kube-apiserver-embed-certs-132977            kube-system
	6cf1cd50ff527       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   a8a4ee6bf190f       etcd-embed-certs-132977                      kube-system
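The table is CRI-level container state as the runtime reports it; with crio, a rough in-node equivalent is:

	sudo crictl ps -a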
	
	
	==> coredns [02fc1551039411f0a7fd0437fab62fd1f0878708de3d1d0ebbe03a0b7d1d50f3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46125 - 54622 "HINFO IN 7535472285758277761.7098127912441478057. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041830422s
	
	
	==> describe nodes <==
	Name:               embed-certs-132977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-132977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=embed-certs-132977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_53_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:53:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132977
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:54:52 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:54:52 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:54:52 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:54:52 +0000   Thu, 02 Oct 2025 21:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-132977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 db52c2ec178c4128b99222e5a4fbc47a
	  System UUID:                3db3ea42-8592-4f96-865b-e348406b1a8e
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-rl5vq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-132977                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         64s
	  kube-system                 kindnet-p845j                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-132977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-132977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-rslfw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-132977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x8 over 74s)  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-132977 event: Registered Node embed-certs-132977 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-132977 status is now: NodeReady
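	
	As a cross-check on the table above, the 850m CPU-request total under Allocated resources is just the per-pod requests summed (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and 850m of the node's 2000m capacity gives the 42% shown; a throwaway Go sketch of the same arithmetic:
	
	package main
	
	import "fmt"
	
	func main() {
		// millicore requests copied from the Non-terminated Pods table
		requests := []int{100, 100, 100, 250, 200, 100}
		total := 0
		for _, m := range requests {
			total += m
		}
		fmt.Printf("total: %dm of 2000m (%d%%)\n", total, total*100/2000) // total: 850m of 2000m (42%)
	}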
	
	
	==> dmesg <==
	[Oct 2 21:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:18] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6cf1cd50ff5275864ab5bb1604073ebea10bc60efc8540bd1e118d91c8f87080] <==
	{"level":"warn","ts":"2025-10-02T21:53:43.466810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.519264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.557248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.605668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.656022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.693389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.731786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.833466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.849500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.913127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.942565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:43.987205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.022443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.071817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.108828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.158600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.197689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.228265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.323547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.409213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.490154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.518015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.542643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.588841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:53:44.809511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44750","server-name":"","error":"EOF"}
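	
	The burst of "rejected connection ... error: EOF" warnings during startup is what etcd logs when a readiness probe opens the client port and hangs up before completing a TLS handshake; a minimal sketch of such a dial-and-close probe (the endpoint is assumed to be the local client port):
	
	package main
	
	import (
		"log"
		"net"
	)
	
	func main() {
		// connect to etcd's client port and close immediately; etcd reads
		// EOF before a TLS ClientHello and logs a rejected connection
		conn, err := net.Dial("tcp", "127.0.0.1:2379")
		if err != nil {
			log.Fatal(err)
		}
		conn.Close()
	}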
	
	
	==> kernel <==
	 21:54:52 up  6:37,  0 user,  load average: 4.29, 3.15, 2.11
	Linux embed-certs-132977 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b964c941c06bd349fbe4222c700c179f2b6c3c454ca99828e641b6f646b3588d] <==
	I1002 21:53:57.511952       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:53:57.513347       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:53:57.513522       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:53:57.513534       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:53:57.513548       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:53:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:53:57.714639       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:53:57.714721       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:53:57.714776       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:53:57.715776       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:54:27.715679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:54:27.715805       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:54:27.715814       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:54:27.715890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 21:54:29.314993       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:54:29.315025       1 metrics.go:72] Registering metrics
	I1002 21:54:29.315091       1 controller.go:711] "Syncing nftables rules"
	I1002 21:54:37.718195       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:54:37.718250       1 main.go:301] handling current node
	I1002 21:54:47.715905       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:54:47.715949       1 main.go:301] handling current node
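	
	The reflector "i/o timeout" errors against 10.96.0.1:443 clear once the caches sync at 21:54:29, i.e. after kube-proxy's service rules are in place; a minimal Go sketch of the same reachability test (run from inside the pod network, VIP taken from the log above):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// TCP dial to the kubernetes service VIP with a deadline, mirroring
		// the client-go list calls that timed out above
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}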
	
	
	==> kube-apiserver [4b4578200a41abed11b1d6003747784242da4da46d1bd079448dd2e1db46dae6] <==
	I1002 21:53:47.134895       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:53:47.157424       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:53:47.252689       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:53:47.252803       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 21:53:47.301922       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:53:47.303461       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:53:47.384289       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:53:47.682288       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 21:53:47.725986       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:53:47.726014       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:53:49.284951       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:53:49.353855       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:53:49.464063       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 21:53:49.472868       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 21:53:49.474007       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:53:49.483316       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:53:50.408089       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:53:50.654007       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:53:50.701577       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 21:53:50.726700       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:53:56.155515       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:53:56.161654       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:53:56.312172       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:53:56.546286       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1002 21:54:50.561366       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50170: use of closed network connection
	
	
	==> kube-controller-manager [d953c91e49d5b4351d7d7234939140c78cb3d24706756536846220d989a7abc4] <==
	I1002 21:53:55.442598       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:53:55.443787       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:53:55.443810       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:53:55.443902       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 21:53:55.444721       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:53:55.451413       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-132977" podCIDRs=["10.244.0.0/24"]
	I1002 21:53:55.454640       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:53:55.457234       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:53:55.458490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:53:55.458517       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:53:55.458524       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:53:55.459557       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 21:53:55.459642       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:53:55.487553       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:53:55.494229       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:53:55.494339       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:53:55.494386       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 21:53:55.494495       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:53:55.494533       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:53:55.494667       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:53:55.494923       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 21:53:55.494989       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:53:55.495035       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:53:55.497005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:54:40.444800       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fefdf8d6ccb743829e85a40e2b48bd8e513597322e72bbf8da86a7792ee8a5fe] <==
	I1002 21:53:57.703662       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:53:57.803699       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:53:57.906345       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:53:57.906560       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 21:53:57.906638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:53:57.949412       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:53:57.949559       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:53:57.962918       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:53:57.963226       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:53:57.963248       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:53:57.964698       1 config.go:200] "Starting service config controller"
	I1002 21:53:57.964718       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:53:57.964735       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:53:57.964740       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:53:57.964750       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:53:57.964761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:53:57.968091       1 config.go:309] "Starting node config controller"
	I1002 21:53:57.968109       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:53:57.968116       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:53:58.065606       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:53:58.065704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:53:58.065724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [52bd30f6083cccf26d1689c7061f3d786a0991272476f66f57f9fcc4c23a58b2] <==
	E1002 21:53:47.227413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 21:53:47.237732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:53:47.237857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:53:47.237975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:53:47.238734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:53:48.119395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:53:48.191133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:53:48.262952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:53:48.277680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:53:48.288269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 21:53:48.335381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:53:48.362198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:53:48.367652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:53:48.380882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:53:48.504469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:53:48.534242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:53:48.537289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:53:48.576682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:53:48.581118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:53:48.612782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:53:48.710990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:53:48.850238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:53:48.850349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:53:48.850426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1002 21:53:49.964881       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:53:52 embed-certs-132977 kubelet[1323]: I1002 21:53:52.282863    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-132977" podStartSLOduration=2.282842634 podStartE2EDuration="2.282842634s" podCreationTimestamp="2025-10-02 21:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:53:52.222310173 +0000 UTC m=+1.629900471" watchObservedRunningTime="2025-10-02 21:53:52.282842634 +0000 UTC m=+1.690432924"
	Oct 02 21:53:52 embed-certs-132977 kubelet[1323]: I1002 21:53:52.326880    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-132977" podStartSLOduration=1.326862955 podStartE2EDuration="1.326862955s" podCreationTimestamp="2025-10-02 21:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:53:52.291053462 +0000 UTC m=+1.698643760" watchObservedRunningTime="2025-10-02 21:53:52.326862955 +0000 UTC m=+1.734453278"
	Oct 02 21:53:55 embed-certs-132977 kubelet[1323]: I1002 21:53:55.453286    1323 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 21:53:55 embed-certs-132977 kubelet[1323]: I1002 21:53:55.453876    1323 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786274    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39333658-6649-4b03-931d-8a103fd98391-kube-proxy\") pod \"kube-proxy-rslfw\" (UID: \"39333658-6649-4b03-931d-8a103fd98391\") " pod="kube-system/kube-proxy-rslfw"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786325    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39333658-6649-4b03-931d-8a103fd98391-xtables-lock\") pod \"kube-proxy-rslfw\" (UID: \"39333658-6649-4b03-931d-8a103fd98391\") " pod="kube-system/kube-proxy-rslfw"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786352    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39333658-6649-4b03-931d-8a103fd98391-lib-modules\") pod \"kube-proxy-rslfw\" (UID: \"39333658-6649-4b03-931d-8a103fd98391\") " pod="kube-system/kube-proxy-rslfw"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786373    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b859d12-4b29-40f6-92a9-f8c597b013db-cni-cfg\") pod \"kindnet-p845j\" (UID: \"9b859d12-4b29-40f6-92a9-f8c597b013db\") " pod="kube-system/kindnet-p845j"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786410    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49r6m\" (UniqueName: \"kubernetes.io/projected/39333658-6649-4b03-931d-8a103fd98391-kube-api-access-49r6m\") pod \"kube-proxy-rslfw\" (UID: \"39333658-6649-4b03-931d-8a103fd98391\") " pod="kube-system/kube-proxy-rslfw"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786431    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b859d12-4b29-40f6-92a9-f8c597b013db-xtables-lock\") pod \"kindnet-p845j\" (UID: \"9b859d12-4b29-40f6-92a9-f8c597b013db\") " pod="kube-system/kindnet-p845j"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786449    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b859d12-4b29-40f6-92a9-f8c597b013db-lib-modules\") pod \"kindnet-p845j\" (UID: \"9b859d12-4b29-40f6-92a9-f8c597b013db\") " pod="kube-system/kindnet-p845j"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.786466    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbljb\" (UniqueName: \"kubernetes.io/projected/9b859d12-4b29-40f6-92a9-f8c597b013db-kube-api-access-cbljb\") pod \"kindnet-p845j\" (UID: \"9b859d12-4b29-40f6-92a9-f8c597b013db\") " pod="kube-system/kindnet-p845j"
	Oct 02 21:53:56 embed-certs-132977 kubelet[1323]: I1002 21:53:56.991368    1323 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:53:58 embed-certs-132977 kubelet[1323]: I1002 21:53:58.149906    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-p845j" podStartSLOduration=2.14988945 podStartE2EDuration="2.14988945s" podCreationTimestamp="2025-10-02 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:53:58.118326318 +0000 UTC m=+7.525916608" watchObservedRunningTime="2025-10-02 21:53:58.14988945 +0000 UTC m=+7.557479740"
	Oct 02 21:53:58 embed-certs-132977 kubelet[1323]: I1002 21:53:58.152782    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rslfw" podStartSLOduration=2.150023805 podStartE2EDuration="2.150023805s" podCreationTimestamp="2025-10-02 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:53:58.149520483 +0000 UTC m=+7.557110781" watchObservedRunningTime="2025-10-02 21:53:58.150023805 +0000 UTC m=+7.557614095"
	Oct 02 21:54:37 embed-certs-132977 kubelet[1323]: I1002 21:54:37.962726    1323 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 21:54:38 embed-certs-132977 kubelet[1323]: I1002 21:54:38.106447    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffda283c-d4c2-4713-ae8d-b471ae5f0646-config-volume\") pod \"coredns-66bc5c9577-rl5vq\" (UID: \"ffda283c-d4c2-4713-ae8d-b471ae5f0646\") " pod="kube-system/coredns-66bc5c9577-rl5vq"
	Oct 02 21:54:38 embed-certs-132977 kubelet[1323]: I1002 21:54:38.106497    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzqqx\" (UniqueName: \"kubernetes.io/projected/ffda283c-d4c2-4713-ae8d-b471ae5f0646-kube-api-access-zzqqx\") pod \"coredns-66bc5c9577-rl5vq\" (UID: \"ffda283c-d4c2-4713-ae8d-b471ae5f0646\") " pod="kube-system/coredns-66bc5c9577-rl5vq"
	Oct 02 21:54:38 embed-certs-132977 kubelet[1323]: I1002 21:54:38.106521    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0ca82009-cfd7-4947-b4f7-2e5f033edac7-tmp\") pod \"storage-provisioner\" (UID: \"0ca82009-cfd7-4947-b4f7-2e5f033edac7\") " pod="kube-system/storage-provisioner"
	Oct 02 21:54:38 embed-certs-132977 kubelet[1323]: I1002 21:54:38.106542    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwtb\" (UniqueName: \"kubernetes.io/projected/0ca82009-cfd7-4947-b4f7-2e5f033edac7-kube-api-access-wfwtb\") pod \"storage-provisioner\" (UID: \"0ca82009-cfd7-4947-b4f7-2e5f033edac7\") " pod="kube-system/storage-provisioner"
	Oct 02 21:54:38 embed-certs-132977 kubelet[1323]: W1002 21:54:38.347781    1323 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/crio-7a3cdd90d57f33ccb97ccabb8f2689dab5ea63bac9e2f774aee2393816396679 WatchSource:0}: Error finding container 7a3cdd90d57f33ccb97ccabb8f2689dab5ea63bac9e2f774aee2393816396679: Status 404 returned error can't find the container with id 7a3cdd90d57f33ccb97ccabb8f2689dab5ea63bac9e2f774aee2393816396679
	Oct 02 21:54:39 embed-certs-132977 kubelet[1323]: I1002 21:54:39.214378    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.214360179 podStartE2EDuration="42.214360179s" podCreationTimestamp="2025-10-02 21:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:54:39.212577794 +0000 UTC m=+48.620168100" watchObservedRunningTime="2025-10-02 21:54:39.214360179 +0000 UTC m=+48.621950477"
	Oct 02 21:54:41 embed-certs-132977 kubelet[1323]: I1002 21:54:41.378023    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rl5vq" podStartSLOduration=45.378002234 podStartE2EDuration="45.378002234s" podCreationTimestamp="2025-10-02 21:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:54:39.237151768 +0000 UTC m=+48.644742066" watchObservedRunningTime="2025-10-02 21:54:41.378002234 +0000 UTC m=+50.785592532"
	Oct 02 21:54:41 embed-certs-132977 kubelet[1323]: I1002 21:54:41.529888    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwc5\" (UniqueName: \"kubernetes.io/projected/9f6d78c1-117c-4139-bd07-281c745fef52-kube-api-access-5jwc5\") pod \"busybox\" (UID: \"9f6d78c1-117c-4139-bd07-281c745fef52\") " pod="default/busybox"
	Oct 02 21:54:41 embed-certs-132977 kubelet[1323]: W1002 21:54:41.712108    1323 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/crio-522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e WatchSource:0}: Error finding container 522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e: Status 404 returned error can't find the container with id 522bf77cfc5c443909ca83ec876462a3e06529ae7f4a9ae9caad6cbe53b8758e
	
	
	==> storage-provisioner [7e0d8b3766c1cff9a45b5a20f8e42cf53b133055a47a54281e9678cfc3112b1d] <==
	I1002 21:54:38.411387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:54:38.431264       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:54:38.431308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:54:38.454417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:38.490117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:54:38.490297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:54:38.490636       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5980ca91-fb93-47dd-a641-e89a0abe52d9", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132977_92ce5e9f-c102-4975-a547-7a9bd223a12b became leader
	I1002 21:54:38.490762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132977_92ce5e9f-c102-4975-a547-7a9bd223a12b!
	W1002 21:54:38.503363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:38.509290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:54:38.591813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132977_92ce5e9f-c102-4975-a547-7a9bd223a12b!
	W1002 21:54:40.512425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:40.517324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.520728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:42.526316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:44.529060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:44.533411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:46.536611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:46.544547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:48.548261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:48.552673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:50.555909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:50.568802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:52.576629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:54:52.585023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
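	
	The recurring deprecation warnings come from the provisioner taking its leader lock on a v1 Endpoints object; a rough sketch (not minikube's actual code) of the coordination.k8s.io Lease-based election that client-go offers as the replacement, assuming in-cluster credentials:
	
	package main
	
	import (
		"context"
		"log"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// a Lease in place of the deprecated Endpoints lock named in the events above
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example"},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}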
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-132977 -n embed-certs-132977
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-132977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.36s)

x
+
TestStartStop/group/embed-certs/serial/Pause (6.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-132977 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-132977 --alsologtostderr -v=1: exit status 80 (2.063764519s)

-- stdout --
	* Pausing node embed-certs-132977 ... 
	
	

-- /stdout --
** stderr ** 
	I1002 21:56:21.316311 1202152 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:56:21.316492 1202152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:56:21.316502 1202152 out.go:374] Setting ErrFile to fd 2...
	I1002 21:56:21.316508 1202152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:56:21.316767 1202152 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:56:21.317022 1202152 out.go:368] Setting JSON to false
	I1002 21:56:21.317048 1202152 mustload.go:65] Loading cluster: embed-certs-132977
	I1002 21:56:21.317424 1202152 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:56:21.317921 1202152 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:56:21.335636 1202152 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:56:21.335980 1202152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:56:21.400615 1202152 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:56:21.391229994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:56:21.401266 1202152 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-132977 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 21:56:21.406563 1202152 out.go:179] * Pausing node embed-certs-132977 ... 
	I1002 21:56:21.409521 1202152 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:56:21.409911 1202152 ssh_runner.go:195] Run: systemctl --version
	I1002 21:56:21.410257 1202152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:56:21.426921 1202152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:56:21.524993 1202152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:56:21.538534 1202152 pause.go:51] kubelet running: true
	I1002 21:56:21.538650 1202152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:56:21.853409 1202152 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:56:21.853565 1202152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:56:21.919298 1202152 cri.go:89] found id: "92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8"
	I1002 21:56:21.919323 1202152 cri.go:89] found id: "208585ddb52b7d92c0f5e71b6bc1c559b7735c239d80dd07f7714f9c3de4df6c"
	I1002 21:56:21.919328 1202152 cri.go:89] found id: "2c425c30abeafa4b5be915f1755bea9cf00d3431b02ee8eeec9724a007378df4"
	I1002 21:56:21.919332 1202152 cri.go:89] found id: "7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669"
	I1002 21:56:21.919336 1202152 cri.go:89] found id: "1fe1e6981e154da7ba02891165e0d46656e53fec146079b96d413f11da41ddf8"
	I1002 21:56:21.919340 1202152 cri.go:89] found id: "6d58e30d958abf78f9eae1abd463fddbeff48f6b25a431b738440cb44c27d524"
	I1002 21:56:21.919343 1202152 cri.go:89] found id: "087df5d3fbc7ac3e91447e1eab8fa3241b3549986576b3ecc72ad7f333152d69"
	I1002 21:56:21.919346 1202152 cri.go:89] found id: "78533f77d44004e2358097b45d52a78adfc4483e84bf46617c6bb8b7536cf7ce"
	I1002 21:56:21.919349 1202152 cri.go:89] found id: "94bf7046df1f24c3099c069c11a0e3c6a2875cedb8d4cf611d9c9244088e5b21"
	I1002 21:56:21.919355 1202152 cri.go:89] found id: "d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d"
	I1002 21:56:21.919358 1202152 cri.go:89] found id: "e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	I1002 21:56:21.919361 1202152 cri.go:89] found id: "c07b17f26fd253ecdc768b4dfbf3f7cead72b2d49933fdc50538afedc65fbf0a"
	I1002 21:56:21.919364 1202152 cri.go:89] found id: ""
	I1002 21:56:21.919413 1202152 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:56:21.938071 1202152 retry.go:31] will retry after 270.35517ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:56:21Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:56:22.209627 1202152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:56:22.232269 1202152 pause.go:51] kubelet running: false
	I1002 21:56:22.232356 1202152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:56:22.432618 1202152 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:56:22.432739 1202152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:56:22.501689 1202152 cri.go:89] found id: "92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8"
	I1002 21:56:22.501759 1202152 cri.go:89] found id: "208585ddb52b7d92c0f5e71b6bc1c559b7735c239d80dd07f7714f9c3de4df6c"
	I1002 21:56:22.501778 1202152 cri.go:89] found id: "2c425c30abeafa4b5be915f1755bea9cf00d3431b02ee8eeec9724a007378df4"
	I1002 21:56:22.501798 1202152 cri.go:89] found id: "7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669"
	I1002 21:56:22.501819 1202152 cri.go:89] found id: "1fe1e6981e154da7ba02891165e0d46656e53fec146079b96d413f11da41ddf8"
	I1002 21:56:22.501868 1202152 cri.go:89] found id: "6d58e30d958abf78f9eae1abd463fddbeff48f6b25a431b738440cb44c27d524"
	I1002 21:56:22.501887 1202152 cri.go:89] found id: "087df5d3fbc7ac3e91447e1eab8fa3241b3549986576b3ecc72ad7f333152d69"
	I1002 21:56:22.501916 1202152 cri.go:89] found id: "78533f77d44004e2358097b45d52a78adfc4483e84bf46617c6bb8b7536cf7ce"
	I1002 21:56:22.501936 1202152 cri.go:89] found id: "94bf7046df1f24c3099c069c11a0e3c6a2875cedb8d4cf611d9c9244088e5b21"
	I1002 21:56:22.501992 1202152 cri.go:89] found id: "d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d"
	I1002 21:56:22.502011 1202152 cri.go:89] found id: "e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	I1002 21:56:22.502081 1202152 cri.go:89] found id: "c07b17f26fd253ecdc768b4dfbf3f7cead72b2d49933fdc50538afedc65fbf0a"
	I1002 21:56:22.502100 1202152 cri.go:89] found id: ""
	I1002 21:56:22.502174 1202152 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:56:22.515916 1202152 retry.go:31] will retry after 502.352021ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:56:22Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:56:23.018550 1202152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:56:23.031653 1202152 pause.go:51] kubelet running: false
	I1002 21:56:23.031787 1202152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:56:23.210681 1202152 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:56:23.210810 1202152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:56:23.297873 1202152 cri.go:89] found id: "92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8"
	I1002 21:56:23.297937 1202152 cri.go:89] found id: "208585ddb52b7d92c0f5e71b6bc1c559b7735c239d80dd07f7714f9c3de4df6c"
	I1002 21:56:23.297956 1202152 cri.go:89] found id: "2c425c30abeafa4b5be915f1755bea9cf00d3431b02ee8eeec9724a007378df4"
	I1002 21:56:23.297986 1202152 cri.go:89] found id: "7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669"
	I1002 21:56:23.298005 1202152 cri.go:89] found id: "1fe1e6981e154da7ba02891165e0d46656e53fec146079b96d413f11da41ddf8"
	I1002 21:56:23.298027 1202152 cri.go:89] found id: "6d58e30d958abf78f9eae1abd463fddbeff48f6b25a431b738440cb44c27d524"
	I1002 21:56:23.298098 1202152 cri.go:89] found id: "087df5d3fbc7ac3e91447e1eab8fa3241b3549986576b3ecc72ad7f333152d69"
	I1002 21:56:23.298117 1202152 cri.go:89] found id: "78533f77d44004e2358097b45d52a78adfc4483e84bf46617c6bb8b7536cf7ce"
	I1002 21:56:23.298130 1202152 cri.go:89] found id: "94bf7046df1f24c3099c069c11a0e3c6a2875cedb8d4cf611d9c9244088e5b21"
	I1002 21:56:23.298137 1202152 cri.go:89] found id: "d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d"
	I1002 21:56:23.298141 1202152 cri.go:89] found id: "e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	I1002 21:56:23.298144 1202152 cri.go:89] found id: "c07b17f26fd253ecdc768b4dfbf3f7cead72b2d49933fdc50538afedc65fbf0a"
	I1002 21:56:23.298147 1202152 cri.go:89] found id: ""
	I1002 21:56:23.298199 1202152 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:56:23.312943 1202152 out.go:203] 
	W1002 21:56:23.315828 1202152 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:56:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:56:23.315850 1202152 out.go:285] * 
	W1002 21:56:23.324697 1202152 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:56:23.327673 1202152 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-132977 --alsologtostderr -v=1 failed: exit status 80
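The stderr block above captures the whole pause path: minikube shells into the node, stops the kubelet (`sudo systemctl disable --now kubelet`), lists CRI containers per pod namespace with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=...`, then enumerates running containers with `sudo runc list -f json`. On this crio node /run/runc is missing, so runc exits 1; the call is retried with growing delays (270ms, then 502ms) before the command fails with GUEST_PAUSE and exit status 80. Below is a minimal Go sketch of that list-then-retry shape; the helper names (crictlListCmd, listViaRunc) and the hard-coded backoff schedule are illustrative assumptions, not minikube's actual pause.go/cri.go/retry.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The namespace filter seen in the cri.go:54 line above.
var namespaces = []string{"kube-system", "kubernetes-dashboard", "istio-operator"}

// crictlListCmd rebuilds the compound command the log runs over SSH:
// one `crictl ps` per namespace label, joined with ';' under sudo -s eval.
func crictlListCmd() string {
	parts := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
	}
	return `sudo -s eval "` + strings.Join(parts, "; ") + `"`
}

// listViaRunc is the step that fails in the log: with no /run/runc state
// directory on a crio node, runc exits 1 and prints the error seen above.
func listViaRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	fmt.Println(crictlListCmd())
	// Retry with increasing delays, loosely mirroring the retry.go lines
	// ("will retry after 270.35517ms", "will retry after 502.352021ms").
	// A zero delay marks the final attempt.
	for _, delay := range []time.Duration{270 * time.Millisecond, 502 * time.Millisecond, 0} {
		out, err := listViaRunc()
		if err == nil {
			fmt.Printf("%s\n", out)
			return
		}
		if delay == 0 {
			break
		}
		time.Sleep(delay)
	}
	fmt.Println("X Exiting due to GUEST_PAUSE: Pause: list running: runc list failed")
}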
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-132977
helpers_test.go:243: (dbg) docker inspect embed-certs-132977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7",
	        "Created": "2025-10-02T21:53:21.268918022Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1199084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:55:06.698703149Z",
	            "FinishedAt": "2025-10-02T21:55:05.709183448Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7-json.log",
	        "Name": "/embed-certs-132977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-132977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7",
	                "LowerDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132977",
	                "Source": "/var/lib/docker/volumes/embed-certs-132977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132977",
	                "name.minikube.sigs.k8s.io": "embed-certs-132977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cffca61b61958622d1028c9d7276194684131aeff27aa7ad7416380c29204d5d",
	            "SandboxKey": "/var/run/docker/netns/cffca61b6195",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:85:39:a5:e4:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09517ef0cb9cfbc2b4218dc316c6d2b554ca0576a9445b01545284a1bf270966",
	                    "EndpointID": "777717d75dc4e647f35276aaf6254b3606b58559b36ecdffb5a4328476358ffc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132977",
	                        "3425438903cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
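The harness reads the node's SSH endpoint out of this inspect output: the Go template in the cli_runner line near the top of the stderr block, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, selects the published host port for 22/tcp, which is 34211 in the Ports map above and matches the sshutil.go "new ssh client" line. A minimal sketch of that lookup follows, assuming a local docker CLI on PATH; sshHostPort is an illustrative helper name, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort resolves the host port docker published for the container's
// 22/tcp, using the same Go template the log shows cli_runner executing.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-132977")
	if err != nil {
		panic(err)
	}
	// Against the inspect output above this prints 127.0.0.1:34211.
	fmt.Println("127.0.0.1:" + port)
}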
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977: exit status 2 (360.005895ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-132977 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-132977 logs -n 25: (1.464955409s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864       │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:53 UTC │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ stop    │ -p embed-certs-132977 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:55 UTC │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:55:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:55:06.318597 1198906 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:55:06.318746 1198906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:55:06.318770 1198906 out.go:374] Setting ErrFile to fd 2...
	I1002 21:55:06.318776 1198906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:55:06.319053 1198906 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:55:06.319517 1198906 out.go:368] Setting JSON to false
	I1002 21:55:06.320459 1198906 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23844,"bootTime":1759418263,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:55:06.320530 1198906 start.go:140] virtualization:  
	I1002 21:55:06.324044 1198906 out.go:179] * [embed-certs-132977] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:55:06.327161 1198906 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:55:06.327268 1198906 notify.go:221] Checking for updates...
	I1002 21:55:06.333179 1198906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:55:06.336158 1198906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:06.339260 1198906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:55:06.342223 1198906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:55:06.345201 1198906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:55:06.349486 1198906 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:06.350148 1198906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:55:06.389441 1198906 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:55:06.389566 1198906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:55:06.487282 1198906 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:55:06.474837196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:55:06.487407 1198906 docker.go:319] overlay module found
	I1002 21:55:06.490531 1198906 out.go:179] * Using the docker driver based on existing profile
	I1002 21:55:06.493436 1198906 start.go:306] selected driver: docker
	I1002 21:55:06.493472 1198906 start.go:936] validating driver "docker" against &{Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:06.493576 1198906 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:55:06.494434 1198906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:55:06.598451 1198906 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:55:06.58294128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:55:06.598826 1198906 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:55:06.598876 1198906 cni.go:84] Creating CNI manager for ""
	I1002 21:55:06.598943 1198906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:06.598995 1198906 start.go:350] cluster config:
	{Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:06.602618 1198906 out.go:179] * Starting "embed-certs-132977" primary control-plane node in "embed-certs-132977" cluster
	I1002 21:55:06.605491 1198906 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:55:06.608521 1198906 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:55:06.611457 1198906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:55:06.611533 1198906 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:55:06.611554 1198906 cache.go:59] Caching tarball of preloaded images
	I1002 21:55:06.611654 1198906 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:55:06.611708 1198906 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:55:06.611845 1198906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/config.json ...
	I1002 21:55:06.612110 1198906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:55:06.639569 1198906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:55:06.639595 1198906 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:55:06.639609 1198906 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:55:06.639636 1198906 start.go:361] acquireMachinesLock for embed-certs-132977: {Name:mkeaddb5abf9563079c0434ecbd0586026902019 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:55:06.639703 1198906 start.go:365] duration metric: took 43.174µs to acquireMachinesLock for "embed-certs-132977"
	I1002 21:55:06.639726 1198906 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:55:06.639737 1198906 fix.go:55] fixHost starting: 
	I1002 21:55:06.639997 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:06.659606 1198906 fix.go:113] recreateIfNeeded on embed-certs-132977: state=Stopped err=<nil>
	W1002 21:55:06.659639 1198906 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:55:02.328599 1197405 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-842185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.576656851s)
	I1002 21:55:02.328651 1197405 kic.go:203] duration metric: took 4.576833699s to extract preloaded images to volume ...
	W1002 21:55:02.328798 1197405 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:55:02.328906 1197405 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:55:02.395196 1197405 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-842185 --name default-k8s-diff-port-842185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-842185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-842185 --network default-k8s-diff-port-842185 --ip 192.168.85.2 --volume default-k8s-diff-port-842185:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:55:02.716817 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Running}}
	I1002 21:55:02.735757 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:02.761461 1197405 cli_runner.go:164] Run: docker exec default-k8s-diff-port-842185 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:55:02.814777 1197405 oci.go:144] the created container "default-k8s-diff-port-842185" has a running status.
	I1002 21:55:02.814808 1197405 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa...
	I1002 21:55:03.649322 1197405 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:55:03.668261 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:03.687221 1197405 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:55:03.687246 1197405 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-842185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:55:03.734694 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:03.759868 1197405 machine.go:93] provisionDockerMachine start ...
	I1002 21:55:03.759978 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:03.785297 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:03.785637 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:03.785648 1197405 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:55:03.925766 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842185
	
	I1002 21:55:03.925788 1197405 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-842185"
	I1002 21:55:03.925860 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:03.949520 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:03.949830 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:03.949843 1197405 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-842185 && echo "default-k8s-diff-port-842185" | sudo tee /etc/hostname
	I1002 21:55:04.096124 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842185
	
	I1002 21:55:04.096209 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:04.114266 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:04.114573 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:04.114597 1197405 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-842185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-842185/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-842185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:55:04.246418 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:55:04.246448 1197405 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:55:04.246468 1197405 ubuntu.go:190] setting up certificates
	I1002 21:55:04.246490 1197405 provision.go:84] configureAuth start
	I1002 21:55:04.246555 1197405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:55:04.266152 1197405 provision.go:143] copyHostCerts
	I1002 21:55:04.266225 1197405 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:55:04.266242 1197405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:55:04.266319 1197405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:55:04.266419 1197405 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:55:04.266428 1197405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:55:04.266453 1197405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:55:04.266512 1197405 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:55:04.266520 1197405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:55:04.266543 1197405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:55:04.266600 1197405 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-842185 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-842185 localhost minikube]
	I1002 21:55:05.105871 1197405 provision.go:177] copyRemoteCerts
	I1002 21:55:05.105950 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:55:05.105995 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.125246 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.221690 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:55:05.238844 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 21:55:05.256616 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:55:05.274738 1197405 provision.go:87] duration metric: took 1.028222796s to configureAuth
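
The configureAuth phase above generates a server certificate whose SANs match the logged list (127.0.0.1, the container IP, the machine name, localhost, minikube). A self-signed sketch with crypto/x509, using values taken from the log; minikube signs with its own CA rather than self-signing:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-842185"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as logged: san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-842185 localhost minikube]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"default-k8s-diff-port-842185", "localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
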
	I1002 21:55:05.274808 1197405 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:55:05.275024 1197405 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:05.275176 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.291963 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:05.292277 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:05.292298 1197405 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:55:05.554362 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:55:05.554385 1197405 machine.go:96] duration metric: took 1.794497187s to provisionDockerMachine
	I1002 21:55:05.554395 1197405 client.go:171] duration metric: took 8.492260211s to LocalClient.Create
	I1002 21:55:05.554408 1197405 start.go:168] duration metric: took 8.492375457s to libmachine.API.Create "default-k8s-diff-port-842185"
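
The container-runtime step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarts crio. A sketch of assembling that one-liner (function name is illustrative):

    package main

    import "fmt"

    // buildCrioOptsCmd assembles the remote shell one-liner seen in the log:
    // write CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube, then restart crio.
    func buildCrioOptsCmd(serviceCIDR string) string {
        opts := fmt.Sprintf("--insecure-registry %s ", serviceCIDR)
        return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
    CRIO_MINIKUBE_OPTIONS='%s'
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
    }

    func main() { fmt.Println(buildCrioOptsCmd("10.96.0.0/12")) }
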
	I1002 21:55:05.554416 1197405 start.go:294] postStartSetup for "default-k8s-diff-port-842185" (driver="docker")
	I1002 21:55:05.554426 1197405 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:55:05.554490 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:55:05.554529 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.573186 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.670327 1197405 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:55:05.673888 1197405 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:55:05.673936 1197405 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:55:05.673947 1197405 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:55:05.674008 1197405 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:55:05.674129 1197405 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:55:05.674236 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:55:05.681859 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:05.709765 1197405 start.go:297] duration metric: took 155.334317ms for postStartSetup
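
postStartSetup scans .minikube/addons and .minikube/files and maps anything under files/ to the same path inside the guest (files/etc/ssl/certs/9939542.pem -> /etc/ssl/certs/9939542.pem above). A sketch of that scan with filepath.WalkDir; illustrative, not minikube's filesync code:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanLocalAssets walks <root>/files and maps each file to its in-guest
    // destination, mirroring the filesync lines in the log.
    func scanLocalAssets(root string) (map[string]string, error) {
        base := filepath.Join(root, "files")
        assets := map[string]string{}
        err := filepath.WalkDir(base, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel := strings.TrimPrefix(p, base) // e.g. /etc/ssl/certs/9939542.pem
            assets[p] = rel
            return nil
        })
        return assets, err
    }

    func main() {
        m, _ := scanLocalAssets("/home/jenkins/minikube-integration/21683-992084/.minikube")
        for src, dst := range m {
            fmt.Printf("local asset: %s -> %s\n", src, dst)
        }
    }
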
	I1002 21:55:05.710193 1197405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:55:05.729568 1197405 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/config.json ...
	I1002 21:55:05.729847 1197405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:55:05.729890 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.750665 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.851879 1197405 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:55:05.859633 1197405 start.go:129] duration metric: took 8.801318523s to createHost
	I1002 21:55:05.859662 1197405 start.go:84] releasing machines lock for "default-k8s-diff-port-842185", held for 8.801449818s
	I1002 21:55:05.859748 1197405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:55:05.878762 1197405 ssh_runner.go:195] Run: cat /version.json
	I1002 21:55:05.878822 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.879125 1197405 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:55:05.879188 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.920476 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.935330 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:06.034117 1197405 ssh_runner.go:195] Run: systemctl --version
	I1002 21:55:06.189025 1197405 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:55:06.236904 1197405 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:55:06.243484 1197405 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:55:06.243558 1197405 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:55:06.281278 1197405 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
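
Bridge and podman CNI configs are renamed with a .mk_disabled suffix so the kindnet CNI chosen later owns the pod network. A pure-Go equivalent of the find/mv command above (illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames *bridge* and *podman* configs in /etc/cni/net.d
    // to <name>.mk_disabled, like the `find ... -exec mv` command in the log.
    func disableBridgeCNIs(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        d, err := disableBridgeCNIs("/etc/cni/net.d")
        fmt.Println(d, err)
    }
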
	I1002 21:55:06.281308 1197405 start.go:496] detecting cgroup driver to use...
	I1002 21:55:06.281349 1197405 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:55:06.281407 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:55:06.303190 1197405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:55:06.317521 1197405 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:55:06.317582 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:55:06.338220 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:55:06.362756 1197405 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:55:06.531468 1197405 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:55:06.691051 1197405 docker.go:234] disabling docker service ...
	I1002 21:55:06.691133 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:55:06.720127 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:55:06.736439 1197405 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:55:06.916669 1197405 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:55:07.113784 1197405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:55:07.130251 1197405 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:55:07.145894 1197405 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:55:07.145965 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.157116 1197405 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:55:07.157195 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.168286 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.178703 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.189262 1197405 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:55:07.198658 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.208390 1197405 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.224026 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.235663 1197405 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:55:07.247713 1197405 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:55:07.258622 1197405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:07.429782 1197405 ssh_runner.go:195] Run: sudo systemctl restart crio
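
The sed calls above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same two substitutions in Go, as a sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the two sed substitutions from the log to the
    // contents of /etc/crio/crio.conf.d/02-crio.conf.
    func rewriteCrioConf(conf []byte) []byte {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pause.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        return cgroup.ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
    }

    func main() {
        in := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
        fmt.Print(string(rewriteCrioConf(in)))
    }
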
	I1002 21:55:07.634020 1197405 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:55:07.634174 1197405 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:55:07.643169 1197405 start.go:564] Will wait 60s for crictl version
	I1002 21:55:07.643288 1197405 ssh_runner.go:195] Run: which crictl
	I1002 21:55:07.647630 1197405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:55:07.702342 1197405 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:55:07.702497 1197405 ssh_runner.go:195] Run: crio --version
	I1002 21:55:07.737472 1197405 ssh_runner.go:195] Run: crio --version
	I1002 21:55:07.772373 1197405 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
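
Both "Will wait 60s" lines above are bounded polls: first for the crio socket to appear, then for crictl to answer. A sketch of such a poll loop (illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the CRI socket, as in "Will wait 60s for socket
    // path /var/run/crio/crio.sock".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for " + path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
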
	I1002 21:55:06.662971 1198906 out.go:252] * Restarting existing docker container for "embed-certs-132977" ...
	I1002 21:55:06.663075 1198906 cli_runner.go:164] Run: docker start embed-certs-132977
	I1002 21:55:06.993248 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:07.020341 1198906 kic.go:430] container "embed-certs-132977" state is running.
	I1002 21:55:07.020723 1198906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132977
	I1002 21:55:07.052377 1198906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/config.json ...
	I1002 21:55:07.052602 1198906 machine.go:93] provisionDockerMachine start ...
	I1002 21:55:07.052662 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:07.094804 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:07.095116 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:07.095126 1198906 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:55:07.095818 1198906 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43124->127.0.0.1:34211: read: connection reset by peer
	I1002 21:55:10.258322 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132977
	
	I1002 21:55:10.258406 1198906 ubuntu.go:182] provisioning hostname "embed-certs-132977"
	I1002 21:55:10.258515 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:10.285880 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:10.286263 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:10.286276 1198906 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-132977 && echo "embed-certs-132977" | sudo tee /etc/hostname
	I1002 21:55:10.438939 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132977
	
	I1002 21:55:10.439065 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:10.471759 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:10.472097 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:10.472125 1198906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-132977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-132977/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-132977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:55:10.626842 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
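
The SSH command above is minikube's /etc/hosts fix-up: patch an existing 127.0.1.1 line or append one for the new hostname. A sketch rendering that snippet for an arbitrary hostname (illustrative):

    package main

    import "fmt"

    // hostsFixupScript renders the /etc/hosts snippet from the log for a given
    // hostname: patch an existing 127.0.1.1 line, or append one.
    func hostsFixupScript(hostname string) string {
        return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() { fmt.Println(hostsFixupScript("embed-certs-132977")) }
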
	I1002 21:55:10.626873 1198906 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:55:10.626902 1198906 ubuntu.go:190] setting up certificates
	I1002 21:55:10.626914 1198906 provision.go:84] configureAuth start
	I1002 21:55:10.626989 1198906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132977
	I1002 21:55:10.655798 1198906 provision.go:143] copyHostCerts
	I1002 21:55:10.655870 1198906 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:55:10.655892 1198906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:55:10.655982 1198906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:55:10.656095 1198906 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:55:10.656107 1198906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:55:10.656136 1198906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:55:10.656209 1198906 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:55:10.656219 1198906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:55:10.656245 1198906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:55:10.656297 1198906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-132977 san=[127.0.0.1 192.168.76.2 embed-certs-132977 localhost minikube]
	I1002 21:55:11.117688 1198906 provision.go:177] copyRemoteCerts
	I1002 21:55:11.117760 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:55:11.117814 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:11.139934 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:11.243183 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:55:11.275985 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:55:11.298701 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:55:11.319666 1198906 provision.go:87] duration metric: took 692.733042ms to configureAuth
	I1002 21:55:11.319704 1198906 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:55:11.319905 1198906 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:11.320023 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:07.775336 1197405 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-842185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:55:07.791155 1197405 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:55:07.794986 1197405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:55:07.804388 1197405 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:55:07.804502 1197405 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:55:07.804562 1197405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:07.840843 1197405 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:07.840864 1197405 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:55:07.840918 1197405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:07.866413 1197405 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:07.866435 1197405 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:55:07.866444 1197405 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1002 21:55:07.866528 1197405 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-842185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
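
The kubelet systemd drop-in above is generated from the cluster config (version, node name, node IP). A sketch rendering it with text/template; the field names are illustrative, not minikube's actual template struct:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        if err := t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.34.1",
            "NodeName":          "default-k8s-diff-port-842185",
            "NodeIP":            "192.168.85.2",
        }); err != nil {
            panic(err)
        }
    }
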
	I1002 21:55:07.866618 1197405 ssh_runner.go:195] Run: crio config
	I1002 21:55:07.919073 1197405 cni.go:84] Creating CNI manager for ""
	I1002 21:55:07.919105 1197405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:07.919123 1197405 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:55:07.919146 1197405 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-842185 NodeName:default-k8s-diff-port-842185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:55:07.919298 1197405 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-842185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
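
The generated kubeadm.yaml above carries four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A sketch pulling one field back out of the KubeletConfiguration to confirm it matches the cgroup driver crio was configured with earlier; this assumes the gopkg.in/yaml.v3 dependency:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    const kubeletDoc = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    `

    func main() {
        var cfg struct {
            Kind         string `yaml:"kind"`
            CgroupDriver string `yaml:"cgroupDriver"`
        }
        if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
            panic(err)
        }
        fmt.Println(cfg.Kind, cfg.CgroupDriver) // KubeletConfiguration cgroupfs
    }
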
	
	I1002 21:55:07.919383 1197405 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:55:07.927156 1197405 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:55:07.927226 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:55:07.934749 1197405 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 21:55:07.947488 1197405 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:55:07.961243 1197405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 21:55:07.974153 1197405 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:55:07.977659 1197405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:55:07.987749 1197405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:08.100015 1197405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:08.116257 1197405 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185 for IP: 192.168.85.2
	I1002 21:55:08.116280 1197405 certs.go:195] generating shared ca certs ...
	I1002 21:55:08.116296 1197405 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.116462 1197405 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:55:08.116530 1197405 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:55:08.116544 1197405 certs.go:257] generating profile certs ...
	I1002 21:55:08.116616 1197405 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key
	I1002 21:55:08.116632 1197405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt with IP's: []
	I1002 21:55:08.361821 1197405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt ...
	I1002 21:55:08.361853 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: {Name:mk7432a8cdd18e2212383ff74a6157cd921bad72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.362092 1197405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key ...
	I1002 21:55:08.362111 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key: {Name:mk20a80d1291d28730b7eb8d1820c44a8dbd0bcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.362212 1197405 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507
	I1002 21:55:08.362231 1197405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 21:55:08.663082 1197405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507 ...
	I1002 21:55:08.663117 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507: {Name:mk6ce5138fe023c7635cb852f6a370bcc28b6ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.663311 1197405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507 ...
	I1002 21:55:08.663325 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507: {Name:mk11039677f0c54458819516a143ebfda060be95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.663408 1197405 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt
	I1002 21:55:08.663489 1197405 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key
	I1002 21:55:08.663549 1197405 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key
	I1002 21:55:08.663566 1197405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt with IP's: []
	I1002 21:55:08.737594 1197405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt ...
	I1002 21:55:08.737624 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt: {Name:mk5008636127be678694cc7cb9c5fdf9c4d19c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.737796 1197405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key ...
	I1002 21:55:08.737811 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key: {Name:mk3e8ecafadac5c1d58c5a25f1ef491fb5dac0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
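
The apiserver cert SANs above include 10.96.0.1, which is simply the first usable address of the 10.96.0.0/12 service CIDR (the kubernetes.default service IP). A quick check:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the conventional kubernetes.default service IP:
    // the first usable address of the service CIDR (10.96.0.0/12 -> 10.96.0.1).
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3]++ // network address + 1
        return out, nil
    }

    func main() {
        ip, _ := firstServiceIP("10.96.0.0/12")
        fmt.Println(ip) // 10.96.0.1
    }
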
	I1002 21:55:08.737990 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:55:08.738057 1197405 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:55:08.738072 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:55:08.738106 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:55:08.738142 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:55:08.738176 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:55:08.738224 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:08.738931 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:55:08.756289 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:55:08.775158 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:55:08.793859 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:55:08.812278 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 21:55:08.829951 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:55:08.848279 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:55:08.865669 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:55:08.883600 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:55:08.901000 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:55:08.918791 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:55:08.936381 1197405 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:55:08.949603 1197405 ssh_runner.go:195] Run: openssl version
	I1002 21:55:08.956901 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:55:08.969520 1197405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:08.974087 1197405 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:08.974208 1197405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:09.016312 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:55:09.026917 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:55:09.035157 1197405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:55:09.038886 1197405 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:55:09.038999 1197405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:55:09.081013 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:55:09.089494 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:55:09.097809 1197405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:55:09.101428 1197405 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:55:09.101536 1197405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:55:09.142642 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
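
The /etc/ssl/certs/<hash>.0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) come from the `openssl x509 -hash -noout` runs in the log: the subject hash of each CA certificate. A sketch that derives the link name the same way, by shelling out to the same command:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash runs the same openssl command the log shows to derive the
    // /etc/ssl/certs/<hash>.0 symlink name for a CA certificate.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err == nil {
            fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
        }
    }
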
	I1002 21:55:09.151830 1197405 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:55:09.155435 1197405 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:55:09.155540 1197405 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:09.155628 1197405 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:55:09.155696 1197405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:55:09.183846 1197405 cri.go:89] found id: ""
	I1002 21:55:09.183943 1197405 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:55:09.191863 1197405 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:55:09.199727 1197405 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:55:09.199824 1197405 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:55:09.207519 1197405 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:55:09.207555 1197405 kubeadm.go:157] found existing configuration files:
	
	I1002 21:55:09.207638 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1002 21:55:09.215463 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:55:09.215575 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:55:09.223251 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1002 21:55:09.231144 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:55:09.231219 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:55:09.238724 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1002 21:55:09.247223 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:55:09.247340 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:55:09.255409 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1002 21:55:09.262921 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:55:09.263008 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
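
The grep-then-rm sequence above is stale kubeconfig cleanup: any conf file that does not mention the expected control-plane endpoint is deleted before kubeadm init. A local-filesystem sketch of that loop (the real code runs these commands over ssh_runner):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfig removes conf files that do not mention the expected
    // endpoint, mirroring the grep-then-rm sequence in the log.
    func cleanStaleKubeconfig(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // ignore errors, as with `rm -f`
            }
        }
    }

    func main() {
        cleanStaleKubeconfig("https://control-plane.minikube.internal:8444", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
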
	I1002 21:55:09.270667 1197405 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:55:09.336934 1197405 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:55:09.337186 1197405 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:55:09.403361 1197405 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
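
kubeadm init is started with a long --ignore-preflight-errors list because several checks (Swap, NumCPU, Mem, SystemVerification, and so on) are not meaningful inside a docker-driver container. A sketch assembling such a command from a slice; the list here is abbreviated from the full one in the log:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
            "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
        }
        cmd := fmt.Sprintf(
            `sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/v1.34.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s"`,
            strings.Join(ignored, ","),
        )
        fmt.Println(cmd)
    }
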
	I1002 21:55:11.341937 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:11.342291 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:11.342313 1198906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:55:11.681028 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:55:11.681102 1198906 machine.go:96] duration metric: took 4.628489792s to provisionDockerMachine
	I1002 21:55:11.681142 1198906 start.go:294] postStartSetup for "embed-certs-132977" (driver="docker")
	I1002 21:55:11.681185 1198906 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:55:11.681290 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:55:11.681372 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:11.716730 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:11.826850 1198906 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:55:11.830899 1198906 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:55:11.830929 1198906 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:55:11.830940 1198906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:55:11.830991 1198906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:55:11.831067 1198906 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:55:11.831176 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:55:11.838769 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:11.856558 1198906 start.go:297] duration metric: took 175.369983ms for postStartSetup
	I1002 21:55:11.856728 1198906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:55:11.856813 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:11.876422 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:11.975607 1198906 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:55:11.985181 1198906 fix.go:57] duration metric: took 5.345437121s for fixHost
	I1002 21:55:11.985209 1198906 start.go:84] releasing machines lock for "embed-certs-132977", held for 5.345493481s
	I1002 21:55:11.985282 1198906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132977
	I1002 21:55:12.012587 1198906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:55:12.012655 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:12.012818 1198906 ssh_runner.go:195] Run: cat /version.json
	I1002 21:55:12.012879 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:12.057509 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:12.067956 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:12.174667 1198906 ssh_runner.go:195] Run: systemctl --version
	I1002 21:55:12.276530 1198906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:55:12.329980 1198906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:55:12.336909 1198906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:55:12.336994 1198906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:55:12.350677 1198906 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:55:12.350719 1198906 start.go:496] detecting cgroup driver to use...
	I1002 21:55:12.350753 1198906 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:55:12.350811 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:55:12.373853 1198906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:55:12.392788 1198906 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:55:12.392887 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:55:12.414999 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:55:12.434368 1198906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:55:12.604479 1198906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:55:12.776957 1198906 docker.go:234] disabling docker service ...
	I1002 21:55:12.777047 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:55:12.801041 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:55:12.816397 1198906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:55:12.961743 1198906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:55:13.104311 1198906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:55:13.118908 1198906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:55:13.133309 1198906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:55:13.133424 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.142133 1198906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:55:13.142251 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.152243 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.161789 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.171525 1198906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:55:13.180798 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.190697 1198906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.200424 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.212122 1198906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:55:13.221495 1198906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:55:13.238822 1198906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:13.400531 1198906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:55:13.581247 1198906 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:55:13.581375 1198906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:55:13.587114 1198906 start.go:564] Will wait 60s for crictl version
	I1002 21:55:13.587191 1198906 ssh_runner.go:195] Run: which crictl
	I1002 21:55:13.591557 1198906 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:55:13.636508 1198906 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
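[Editor's note] "Will wait 60s for socket path" above is a bounded poll for the CRI socket after the crio restart. A minimal standard-library sketch of that wait (local os.Stat standing in for the remote stat in the log):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is present
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }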
	I1002 21:55:13.636606 1198906 ssh_runner.go:195] Run: crio --version
	I1002 21:55:13.668599 1198906 ssh_runner.go:195] Run: crio --version
	I1002 21:55:13.705619 1198906 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:55:13.708531 1198906 cli_runner.go:164] Run: docker network inspect embed-certs-132977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:55:13.740416 1198906 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:55:13.744641 1198906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:55:13.769366 1198906 kubeadm.go:883] updating cluster {Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:55:13.769479 1198906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:55:13.769542 1198906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:13.826511 1198906 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:13.826532 1198906 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:55:13.826594 1198906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:13.854095 1198906 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:13.854115 1198906 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:55:13.854123 1198906 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 21:55:13.854224 1198906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-132977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:55:13.854302 1198906 ssh_runner.go:195] Run: crio config
	I1002 21:55:13.924225 1198906 cni.go:84] Creating CNI manager for ""
	I1002 21:55:13.924284 1198906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:13.924313 1198906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:55:13.924364 1198906 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-132977 NodeName:embed-certs-132977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:55:13.924518 1198906 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-132977"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:55:13.924603 1198906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:55:13.932353 1198906 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:55:13.932470 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:55:13.942818 1198906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 21:55:13.955958 1198906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:55:13.972176 1198906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1002 21:55:13.985855 1198906 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:55:13.990074 1198906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
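[Editor's note] The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any stale line ending in a tab plus the hostname, append the fresh one, and copy the temp file over /etc/hosts. A minimal Go sketch of the same rewrite (local file access assumed; the log uses `sudo cp` rather than a rename, which preserves the target's inode and permissions):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func setHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // mirror `grep -v $'\t<name>$'`: drop lines ending in "\t<name>"
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath) // the log copies with sudo cp instead
    }

    func main() {
        fmt.Println(setHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"))
    }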
	I1002 21:55:14.000764 1198906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:14.144471 1198906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:14.163458 1198906 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977 for IP: 192.168.76.2
	I1002 21:55:14.163485 1198906 certs.go:195] generating shared ca certs ...
	I1002 21:55:14.163505 1198906 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:14.163657 1198906 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:55:14.163721 1198906 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:55:14.163736 1198906 certs.go:257] generating profile certs ...
	I1002 21:55:14.163822 1198906 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/client.key
	I1002 21:55:14.163893 1198906 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/apiserver.key.8d55cb16
	I1002 21:55:14.163939 1198906 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/proxy-client.key
	I1002 21:55:14.164056 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:55:14.164090 1198906 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:55:14.164103 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:55:14.164128 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:55:14.164154 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:55:14.164179 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:55:14.164225 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:14.164778 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:55:14.186492 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:55:14.207752 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:55:14.227486 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:55:14.274551 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 21:55:14.303019 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:55:14.332219 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:55:14.364720 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 21:55:14.411902 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:55:14.436603 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:55:14.491255 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:55:14.535641 1198906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:55:14.563295 1198906 ssh_runner.go:195] Run: openssl version
	I1002 21:55:14.582545 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:55:14.598848 1198906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:14.608864 1198906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:14.608974 1198906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:14.655282 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:55:14.663576 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:55:14.672022 1198906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:55:14.676396 1198906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:55:14.676508 1198906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:55:14.732433 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:55:14.741548 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:55:14.749725 1198906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:55:14.753847 1198906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:55:14.753988 1198906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:55:14.795673 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
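[Editor's note] The symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) exist because OpenSSL-style trust stores look CA certificates up by subject hash, so each installed PEM needs a "<hash>.0" link in /etc/ssl/certs. A sketch of that step, shelling out to openssl exactly as the log does (local execution and write access to the certs dir are assumptions):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM cert and
    // creates the "<hash>.0" symlink the trust store expects.
    func linkBySubjectHash(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }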
	I1002 21:55:14.803945 1198906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:55:14.808159 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:55:14.852024 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:55:14.896394 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:55:14.939517 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:55:14.981303 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:55:15.057785 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
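[Editor's note] The six `openssl x509 -checkend 86400` runs above each ask "does this certificate expire within 24 hours?". The same check in pure Go with crypto/x509, a sketch rather than minikube's code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at certPath expires inside
    // the given window, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(certPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(certPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", certPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }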
	I1002 21:55:15.149722 1198906 kubeadm.go:400] StartCluster: {Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:15.149861 1198906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:55:15.149971 1198906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:55:15.240480 1198906 cri.go:89] found id: ""
	I1002 21:55:15.240631 1198906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:55:15.253260 1198906 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:55:15.253337 1198906 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:55:15.253430 1198906 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:55:15.290748 1198906 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:55:15.291217 1198906 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-132977" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:15.291390 1198906 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-132977" cluster setting kubeconfig missing "embed-certs-132977" context setting]
	I1002 21:55:15.291713 1198906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:15.293206 1198906 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:55:15.323404 1198906 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:55:15.323478 1198906 kubeadm.go:601] duration metric: took 70.112584ms to restartPrimaryControlPlane
	I1002 21:55:15.323502 1198906 kubeadm.go:402] duration metric: took 173.789446ms to StartCluster
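[Editor's note] The `diff -u kubeadm.yaml kubeadm.yaml.new` above is how the restart path decides "the running cluster does not require reconfiguration": if the freshly rendered config matches what is already on disk, kubeadm is not re-run. A minimal sketch of that decision (a content comparison under the assumption that byte-identical files mean no reconfiguration):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // needsReconfigure reports whether the proposed kubeadm config differs
    // from the one currently on disk. A missing current file also forces
    // (re)configuration.
    func needsReconfigure(current, proposed string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil {
            return true, err
        }
        b, err := os.ReadFile(proposed)
        if err != nil {
            return true, err
        }
        return !bytes.Equal(a, b), nil
    }

    func main() {
        changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(changed, err)
    }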
	I1002 21:55:15.323544 1198906 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:15.323621 1198906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:15.324667 1198906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:15.324924 1198906 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:55:15.325246 1198906 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:15.325319 1198906 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:55:15.325519 1198906 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132977"
	I1002 21:55:15.325550 1198906 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-132977"
	W1002 21:55:15.325614 1198906 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:55:15.325654 1198906 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:55:15.325585 1198906 addons.go:69] Setting dashboard=true in profile "embed-certs-132977"
	I1002 21:55:15.325767 1198906 addons.go:238] Setting addon dashboard=true in "embed-certs-132977"
	W1002 21:55:15.325774 1198906 addons.go:247] addon dashboard should already be in state true
	I1002 21:55:15.325846 1198906 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:55:15.326443 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.325596 1198906 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132977"
	I1002 21:55:15.326952 1198906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132977"
	I1002 21:55:15.327242 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.327600 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.330108 1198906 out.go:179] * Verifying Kubernetes components...
	I1002 21:55:15.333475 1198906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:15.378615 1198906 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:55:15.384876 1198906 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:15.384901 1198906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:55:15.384967 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:15.390302 1198906 addons.go:238] Setting addon default-storageclass=true in "embed-certs-132977"
	W1002 21:55:15.390325 1198906 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:55:15.390349 1198906 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:55:15.390769 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.398105 1198906 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:55:15.402190 1198906 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:55:15.405678 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:55:15.405708 1198906 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:55:15.405772 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:15.438271 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:15.438426 1198906 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:15.438441 1198906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:55:15.438497 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:15.460247 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:15.483944 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:15.790316 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:55:15.790342 1198906 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:55:15.859986 1198906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:15.864526 1198906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:15.875582 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:55:15.875605 1198906 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:55:15.939188 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:55:15.939212 1198906 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:55:16.014804 1198906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:16.100539 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:55:16.100610 1198906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:55:16.235619 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:55:16.235640 1198906 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:55:16.327895 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:55:16.327919 1198906 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:55:16.400544 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:55:16.400567 1198906 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:55:16.451504 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:55:16.451529 1198906 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:55:16.484423 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:55:16.484449 1198906 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:55:16.534636 1198906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:55:24.015358 1198906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.15533452s)
	I1002 21:55:25.587198 1198906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.722637541s)
	I1002 21:55:25.587277 1198906 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.572402454s)
	I1002 21:55:25.587314 1198906 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:55:25.729388 1198906 node_ready.go:49] node "embed-certs-132977" is "Ready"
	I1002 21:55:25.729416 1198906 node_ready.go:38] duration metric: took 142.084984ms for node "embed-certs-132977" to be "Ready" ...
	I1002 21:55:25.729432 1198906 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:55:25.729489 1198906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:55:26.371937 1198906 api_server.go:72] duration metric: took 11.046955412s to wait for apiserver process to appear ...
	I1002 21:55:26.371963 1198906 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:55:26.371982 1198906 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:55:26.372883 1198906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.838207614s)
	I1002 21:55:26.387501 1198906 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:55:26.390280 1198906 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
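[Editor's note] The 500 above is transient: every subsystem reports ok except poststarthook/rbac/bootstrap-roles, which finishes shortly after apiserver start, and the next probe at 21:55:26.895 returns 200. A sketch of the polling loop, treating non-200 as retryable; InsecureSkipVerify is an assumption standing in for the real client-certificate TLS config minikube builds from the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz probes /healthz until it returns 200 or the timeout passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", time.Minute))
    }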
	I1002 21:55:26.401151 1198906 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-132977 addons enable metrics-server
	
	I1002 21:55:26.404602 1198906 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 21:55:26.406902 1198906 addons.go:514] duration metric: took 11.081554995s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 21:55:26.872953 1198906 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:55:26.895743 1198906 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:55:26.897417 1198906 api_server.go:141] control plane version: v1.34.1
	I1002 21:55:26.897444 1198906 api_server.go:131] duration metric: took 525.473495ms to wait for apiserver health ...
	I1002 21:55:26.897454 1198906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:55:26.911517 1198906 system_pods.go:59] 8 kube-system pods found
	I1002 21:55:26.911556 1198906 system_pods.go:61] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:55:26.911584 1198906 system_pods.go:61] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:55:26.911601 1198906 system_pods.go:61] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:55:26.911610 1198906 system_pods.go:61] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:55:26.911630 1198906 system_pods.go:61] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:55:26.911636 1198906 system_pods.go:61] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:55:26.911659 1198906 system_pods.go:61] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:55:26.911668 1198906 system_pods.go:61] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:55:26.911686 1198906 system_pods.go:74] duration metric: took 14.207552ms to wait for pod list to return data ...
	I1002 21:55:26.911701 1198906 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:55:26.935857 1198906 default_sa.go:45] found service account: "default"
	I1002 21:55:26.935895 1198906 default_sa.go:55] duration metric: took 24.186982ms for default service account to be created ...
	I1002 21:55:26.935905 1198906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:55:26.949401 1198906 system_pods.go:86] 8 kube-system pods found
	I1002 21:55:26.949438 1198906 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:55:26.949448 1198906 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:55:26.949474 1198906 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:55:26.949488 1198906 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:55:26.949496 1198906 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:55:26.949505 1198906 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:55:26.949513 1198906 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:55:26.949523 1198906 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:55:26.949531 1198906 system_pods.go:126] duration metric: took 13.602088ms to wait for k8s-apps to be running ...
	I1002 21:55:26.949557 1198906 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:55:26.949629 1198906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:55:26.980434 1198906 system_svc.go:56] duration metric: took 30.868045ms WaitForService to wait for kubelet
	I1002 21:55:26.980514 1198906 kubeadm.go:586] duration metric: took 11.65553571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:55:26.980549 1198906 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:55:26.997172 1198906 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:55:26.997253 1198906 node_conditions.go:123] node cpu capacity is 2
	I1002 21:55:26.997281 1198906 node_conditions.go:105] duration metric: took 16.712119ms to run NodePressure ...
	I1002 21:55:26.997311 1198906 start.go:242] waiting for startup goroutines ...
	I1002 21:55:26.997352 1198906 start.go:247] waiting for cluster config update ...
	I1002 21:55:26.997378 1198906 start.go:256] writing updated cluster config ...
	I1002 21:55:26.997737 1198906 ssh_runner.go:195] Run: rm -f paused
	I1002 21:55:27.003915 1198906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:55:27.023438 1198906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:55:29.060193 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
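[Editor's note] The pod_ready loop above (continued at 21:55:31 below, interleaved with output from the parallel default-k8s-diff-port run) repeatedly checks whether coredns-66bc5c9577-rl5vq has reached the Ready condition. A client-go sketch of such a wait; the kubeconfig path is hypothetical and this is not minikube's pod_ready.go implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-rl5vq", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }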
	I1002 21:55:35.979466 1197405 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:55:35.979524 1197405 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:55:35.979615 1197405 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:55:35.979685 1197405 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:55:35.979723 1197405 kubeadm.go:318] OS: Linux
	I1002 21:55:35.979770 1197405 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:55:35.979820 1197405 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:55:35.979869 1197405 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:55:35.979919 1197405 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:55:35.979969 1197405 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:55:35.980022 1197405 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:55:35.980069 1197405 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:55:35.980119 1197405 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:55:35.980167 1197405 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:55:35.980241 1197405 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:55:35.980351 1197405 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:55:35.980444 1197405 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:55:35.980509 1197405 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:55:35.983998 1197405 out.go:252]   - Generating certificates and keys ...
	I1002 21:55:35.984092 1197405 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:55:35.984159 1197405 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:55:35.984230 1197405 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:55:35.984289 1197405 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:55:35.984353 1197405 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:55:35.984406 1197405 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:55:35.984469 1197405 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:55:35.984616 1197405 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-842185 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:55:35.984673 1197405 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:55:35.984808 1197405 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-842185 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:55:35.984882 1197405 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:55:35.984949 1197405 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:55:35.984995 1197405 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:55:35.985062 1197405 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:55:35.985117 1197405 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:55:35.985176 1197405 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:55:35.985234 1197405 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:55:35.985300 1197405 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:55:35.985357 1197405 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:55:35.985442 1197405 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:55:35.985512 1197405 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:55:35.988646 1197405 out.go:252]   - Booting up control plane ...
	I1002 21:55:35.988826 1197405 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:55:35.988958 1197405 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:55:35.989085 1197405 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:55:35.989209 1197405 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:55:35.989313 1197405 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:55:35.989428 1197405 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:55:35.989521 1197405 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:55:35.989565 1197405 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:55:35.989707 1197405 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:55:35.989829 1197405 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:55:35.989895 1197405 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 4.001041271s
	I1002 21:55:35.989997 1197405 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:55:35.990114 1197405 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1002 21:55:35.990214 1197405 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:55:35.990301 1197405 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:55:35.990385 1197405 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.369986048s
	I1002 21:55:35.990459 1197405 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.368376906s
	I1002 21:55:35.990534 1197405 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502724799s
	I1002 21:55:35.990652 1197405 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:55:35.990790 1197405 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:55:35.990869 1197405 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:55:35.991092 1197405 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-842185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:55:35.991155 1197405 kubeadm.go:318] [bootstrap-token] Using token: l5i99u.q2o89w4rszqt38dw
	I1002 21:55:35.994303 1197405 out.go:252]   - Configuring RBAC rules ...
	I1002 21:55:35.994466 1197405 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:55:35.994607 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:55:35.994821 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:55:35.994967 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:55:35.995097 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:55:35.995193 1197405 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:55:35.995321 1197405 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:55:35.995370 1197405 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:55:35.995421 1197405 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:55:35.995425 1197405 kubeadm.go:318] 
	I1002 21:55:35.995492 1197405 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:55:35.995497 1197405 kubeadm.go:318] 
	I1002 21:55:35.995583 1197405 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:55:35.995587 1197405 kubeadm.go:318] 
	I1002 21:55:35.995616 1197405 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:55:35.995690 1197405 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:55:35.995753 1197405 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:55:35.995758 1197405 kubeadm.go:318] 
	I1002 21:55:35.995824 1197405 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:55:35.995828 1197405 kubeadm.go:318] 
	I1002 21:55:35.995881 1197405 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:55:35.995885 1197405 kubeadm.go:318] 
	I1002 21:55:35.995943 1197405 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:55:35.996029 1197405 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:55:35.996106 1197405 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:55:35.996110 1197405 kubeadm.go:318] 
	I1002 21:55:35.996204 1197405 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:55:35.996290 1197405 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:55:35.996294 1197405 kubeadm.go:318] 
	I1002 21:55:35.996388 1197405 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token l5i99u.q2o89w4rszqt38dw \
	I1002 21:55:35.996503 1197405 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:55:35.996526 1197405 kubeadm.go:318] 	--control-plane 
	I1002 21:55:35.996530 1197405 kubeadm.go:318] 
	I1002 21:55:35.996625 1197405 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:55:35.996629 1197405 kubeadm.go:318] 
	I1002 21:55:35.996721 1197405 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token l5i99u.q2o89w4rszqt38dw \
	I1002 21:55:35.996847 1197405 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:55:35.996855 1197405 cni.go:84] Creating CNI manager for ""
	I1002 21:55:35.996863 1197405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:36.000175 1197405 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 21:55:31.529837 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:33.530446 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:35.533070 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:36.003982 1197405 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:55:36.014108 1197405 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:55:36.014182 1197405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:55:36.045979 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
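	
	The cni.yaml applied here deploys kindnet, which then writes a conflist on the node (the CRI-O section below logs it appearing at /etc/cni/net.d/10-kindnet.conflist with type=ptp). A hedged sketch of how to inspect it and the rough shape to expect; the exact contents are generated by kindnet and are not captured in this log:
	
	  # Run inside the minikube node (e.g. via `minikube ssh`):
	  sudo cat /etc/cni/net.d/10-kindnet.conflist
	  # Rough expected shape (illustrative only; the subnet matches the node
	  # PodCIDR, 10.244.0.0/24 in the "describe nodes" section below):
	  # {
	  #   "cniVersion": "0.3.1",
	  #   "name": "kindnet",
	  #   "plugins": [
	  #     { "type": "ptp", "ipMasq": false,
	  #       "ipam": { "type": "host-local",
	  #                 "ranges": [[ { "subnet": "10.244.0.0/24" } ]],
	  #                 "routes": [ { "dst": "0.0.0.0/0" } ] } },
	  #     { "type": "portmap", "capabilities": { "portMappings": true } }
	  #   ]
	  # }
	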
	I1002 21:55:36.474882 1197405 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:55:36.475041 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:36.475220 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-842185 minikube.k8s.io/updated_at=2025_10_02T21_55_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=default-k8s-diff-port-842185 minikube.k8s.io/primary=true
	I1002 21:55:36.847878 1197405 ops.go:34] apiserver oom_adj: -16
	I1002 21:55:36.848019 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:37.348130 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:37.848587 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:38.349057 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:38.848786 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:39.348277 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:39.848415 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:40.088672 1197405 kubeadm.go:1113] duration metric: took 3.613704479s to wait for elevateKubeSystemPrivileges
	I1002 21:55:40.088724 1197405 kubeadm.go:402] duration metric: took 30.933175867s to StartCluster
	I1002 21:55:40.088744 1197405 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:40.088816 1197405 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:40.090523 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:40.091031 1197405 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:55:40.091098 1197405 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:55:40.091345 1197405 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:40.091402 1197405 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:55:40.091471 1197405 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-842185"
	I1002 21:55:40.091497 1197405 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-842185"
	I1002 21:55:40.091521 1197405 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:55:40.091741 1197405 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-842185"
	I1002 21:55:40.091788 1197405 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-842185"
	I1002 21:55:40.092092 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:40.092551 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:40.097919 1197405 out.go:179] * Verifying Kubernetes components...
	I1002 21:55:40.108985 1197405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:40.137419 1197405 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 21:55:38.029658 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:40.030469 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:40.145301 1197405 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-842185"
	I1002 21:55:40.145341 1197405 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:55:40.145790 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:40.146986 1197405 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:40.147005 1197405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:55:40.147075 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:40.186732 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:40.198277 1197405 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:40.198298 1197405 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:55:40.198362 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:40.222147 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:40.606876 1197405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:40.648595 1197405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:40.740968 1197405 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:55:40.741160 1197405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:41.941267 1197405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334303981s)
	I1002 21:55:42.509088 1197405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.860420351s)
	I1002 21:55:42.509420 1197405 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.768130059s)
	I1002 21:55:42.509611 1197405 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.768566315s)
	I1002 21:55:42.509637 1197405 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
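	
	The sed pipeline completed above rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors. Reconstructed from those sed expressions, the patched Corefile should contain the fragments below, among the stock directives; a sketch for verifying, using the same kubectl binary and kubeconfig as the commands above:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # Expected fragments:
	  #     log
	  #     errors
	  #     hosts {
	  #        192.168.85.1 host.minikube.internal
	  #        fallthrough
	  #     }
	  #     forward . /etc/resolv.conf
	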
	I1002 21:55:42.510795 1197405 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-842185" to be "Ready" ...
	I1002 21:55:42.513703 1197405 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1002 21:55:42.046109 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:44.529013 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:42.517203 1197405 addons.go:514] duration metric: took 2.425780848s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 21:55:43.015471 1197405 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-842185" context rescaled to 1 replicas
	W1002 21:55:44.513774 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:46.513970 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:46.529273 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:49.028764 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:49.013825 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:51.014317 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:51.529625 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:54.029606 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:53.514110 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:55.514638 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:56.529650 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:59.029013 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:58.013888 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:00.053535 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:01.530819 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:56:04.030189 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:56:06.031373 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:56:02.513618 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:04.514390 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	I1002 21:56:08.029249 1198906 pod_ready.go:94] pod "coredns-66bc5c9577-rl5vq" is "Ready"
	I1002 21:56:08.029279 1198906 pod_ready.go:86] duration metric: took 41.005766161s for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.032058 1198906 pod_ready.go:83] waiting for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.036587 1198906 pod_ready.go:94] pod "etcd-embed-certs-132977" is "Ready"
	I1002 21:56:08.036619 1198906 pod_ready.go:86] duration metric: took 4.528118ms for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.039096 1198906 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.044031 1198906 pod_ready.go:94] pod "kube-apiserver-embed-certs-132977" is "Ready"
	I1002 21:56:08.044061 1198906 pod_ready.go:86] duration metric: took 4.938668ms for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.046520 1198906 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.227163 1198906 pod_ready.go:94] pod "kube-controller-manager-embed-certs-132977" is "Ready"
	I1002 21:56:08.227190 1198906 pod_ready.go:86] duration metric: took 180.643151ms for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.427363 1198906 pod_ready.go:83] waiting for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.827701 1198906 pod_ready.go:94] pod "kube-proxy-rslfw" is "Ready"
	I1002 21:56:08.827731 1198906 pod_ready.go:86] duration metric: took 400.339176ms for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:09.026852 1198906 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:09.427392 1198906 pod_ready.go:94] pod "kube-scheduler-embed-certs-132977" is "Ready"
	I1002 21:56:09.427422 1198906 pod_ready.go:86] duration metric: took 400.541083ms for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:09.427434 1198906 pod_ready.go:40] duration metric: took 42.423427643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:56:09.484351 1198906 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:56:09.489520 1198906 out.go:179] * Done! kubectl is now configured to use "embed-certs-132977" cluster and "default" namespace by default
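	
	The "minor skew: 1" note two lines up is informational: kubectl 1.33 against a 1.34 API server is within kubectl's supported window of one minor version of the server. A quick way to re-check the skew, assuming jq is available on the host (not shown in this log):
	
	  # Print client and server versions side by side.
	  kubectl version --output=json \
	    | jq -r '"client: \(.clientVersion.gitVersion)  server: \(.serverVersion.gitVersion)"'
	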
	W1002 21:56:07.014195 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:09.522741 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:12.014626 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:14.516118 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:17.014426 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:19.014529 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:21.017243 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 02 21:55:57 embed-certs-132977 crio[656]: time="2025-10-02T21:55:57.90243507Z" level=info msg="Created container 92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8: kube-system/storage-provisioner/storage-provisioner" id=76873e2e-1265-4c1d-a2e7-ebb5e181b19b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:55:57 embed-certs-132977 crio[656]: time="2025-10-02T21:55:57.903503956Z" level=info msg="Starting container: 92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8" id=34703225-1220-43f8-b590-1979d9bc9402 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:55:57 embed-certs-132977 crio[656]: time="2025-10-02T21:55:57.908185228Z" level=info msg="Started container" PID=1643 containerID=92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8 description=kube-system/storage-provisioner/storage-provisioner id=34703225-1220-43f8-b590-1979d9bc9402 name=/runtime.v1.RuntimeService/StartContainer sandboxID=016619be059244c05400ac46c5d5f12aaa686d0f8a08381fbe7f9d11edef3d1b
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.692193386Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.699568363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.699604489Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.69962734Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.702836511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.702867796Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.702889859Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.705674836Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.705704751Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.705729571Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.709124789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.709156295Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.605682801Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=63b50381-210e-45cb-ba11-315145ad6501 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.609083499Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=78059276-021e-44d6-8ccb-896c6babef4c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.612888405Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q/dashboard-metrics-scraper" id=ce72b1f0-41eb-48ab-b7b5-c587077b40ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.613306822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.628844719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.630069106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.654405237Z" level=info msg="Created container d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q/dashboard-metrics-scraper" id=ce72b1f0-41eb-48ab-b7b5-c587077b40ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.655860485Z" level=info msg="Starting container: d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d" id=6598b8e3-320e-4138-873e-5e2b37bdaa1e name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.658950324Z" level=info msg="Started container" PID=1763 containerID=d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q/dashboard-metrics-scraper id=6598b8e3-320e-4138-873e-5e2b37bdaa1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566
	Oct 02 21:56:21 embed-certs-132977 conmon[1761]: conmon d570b1278bbcfa597703 <ninfo>: container 1763 exited with status 1
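	
	The conmon line above records dashboard-metrics-scraper exiting with status 1 (attempt 3 per the status table below; the kubelet section at the end shows the matching CrashLoopBackOff back-off). A sketch for pulling the crashed container's output, using the pod name from the status table below:
	
	  # Logs from the previous (crashed) instance of the scraper container.
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-58v9q --previous
	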
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d570b1278bbcf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 seconds ago        Exited              dashboard-metrics-scraper   3                   5c3e5620b0f29       dashboard-metrics-scraper-6ffb444bf9-58v9q   kubernetes-dashboard
	92fddbea2a089       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   016619be05924       storage-provisioner                          kube-system
	e50c7081a6e25       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   5c3e5620b0f29       dashboard-metrics-scraper-6ffb444bf9-58v9q   kubernetes-dashboard
	c07b17f26fd25       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   ee37b3c4a6fdf       kubernetes-dashboard-855c9754f9-pncmh        kubernetes-dashboard
	208585ddb52b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   739efcab6bc55       coredns-66bc5c9577-rl5vq                     kube-system
	e6f1aefa88ce7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   c2a500c7b47a2       busybox                                      default
	2c425c30abeaf       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   bd69b0eaa803c       kube-proxy-rslfw                             kube-system
	7e5106a143779       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   016619be05924       storage-provisioner                          kube-system
	1fe1e6981e154       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   754cdbb60fce3       kindnet-p845j                                kube-system
	6d58e30d958ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6ddd6ada1c043       kube-controller-manager-embed-certs-132977   kube-system
	087df5d3fbc7a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   187f43369318e       kube-scheduler-embed-certs-132977            kube-system
	78533f77d4400       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   766e69b47deb3       kube-apiserver-embed-certs-132977            kube-system
	94bf7046df1f2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6805590c13580       etcd-embed-certs-132977                      kube-system
	
	
	==> coredns [208585ddb52b7d92c0f5e71b6bc1c559b7735c239d80dd07f7714f9c3de4df6c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50463 - 8979 "HINFO IN 9164443044845730516.4692625434705116109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025007498s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
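	
	The dial tcp 10.96.0.1:443 timeouts above are CoreDNS failing to reach the in-cluster API service VIP while it starts up; the kindnet section below hits the same timeouts at 21:55:56 and then reports its caches synced at 21:55:58, so the failure appears transient. A hedged starting point for checking that VIP, had it not recovered:
	
	  # Confirm the kubernetes service VIP and that it has endpoints.
	  kubectl get svc kubernetes -o wide
	  kubectl get endpoints kubernetes
	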
	
	
	==> describe nodes <==
	Name:               embed-certs-132977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-132977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=embed-certs-132977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_53_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:53:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132977
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:56:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-132977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 08f34a27958942128ae83ba1536ee2b9
	  System UUID:                3db3ea42-8592-4f96-865b-e348406b1a8e
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-rl5vq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-embed-certs-132977                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m36s
	  kube-system                 kindnet-p845j                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-embed-certs-132977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-embed-certs-132977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-rslfw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-embed-certs-132977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-58v9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pncmh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m29s                  node-controller  Node embed-certs-132977 event: Registered Node embed-certs-132977 in Controller
	  Normal   NodeReady                107s                   kubelet          Node embed-certs-132977 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-132977 event: Registered Node embed-certs-132977 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [94bf7046df1f24c3099c069c11a0e3c6a2875cedb8d4cf611d9c9244088e5b21] <==
	{"level":"warn","ts":"2025-10-02T21:55:19.605859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.678232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.754650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.808836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.828389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.890707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.946246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.028515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.078620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.159669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.170487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.230972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.270312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.339156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.374561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.434281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.464040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.522590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.559556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.610265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.668460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.750618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.829545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.835553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:21.062192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42826","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:56:24 up  6:38,  0 user,  load average: 3.96, 3.47, 2.32
	Linux embed-certs-132977 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1fe1e6981e154da7ba02891165e0d46656e53fec146079b96d413f11da41ddf8] <==
	I1002 21:55:26.511121       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:55:26.511525       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:55:26.511696       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:55:26.511948       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:55:26.512011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:55:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:55:26.693827       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:55:26.693933       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:55:26.693967       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:55:26.706936       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:55:56.694720       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:55:56.707293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:55:56.707293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:55:56.707475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 21:55:58.194661       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:55:58.194719       1 metrics.go:72] Registering metrics
	I1002 21:55:58.194816       1 controller.go:711] "Syncing nftables rules"
	I1002 21:56:06.691830       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:56:06.691938       1 main.go:301] handling current node
	I1002 21:56:16.690909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:56:16.690943       1 main.go:301] handling current node
	
	
	==> kube-apiserver [78533f77d44004e2358097b45d52a78adfc4483e84bf46617c6bb8b7536cf7ce] <==
	I1002 21:55:23.969655       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:55:23.990661       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:55:23.990686       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:55:23.991450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:55:23.997105       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:55:24.001418       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:55:24.010187       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:55:24.010717       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:55:24.010782       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 21:55:24.010805       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:55:24.011389       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:55:24.011424       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:55:24.035859       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:55:24.036345       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:55:24.054525       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:55:25.089180       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:55:25.303731       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:55:25.528438       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:55:25.532723       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:55:25.759475       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:55:26.221096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.204.166"}
	I1002 21:55:26.364836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.16.54"}
	I1002 21:55:28.798733       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:55:28.848442       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:55:28.902455       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6d58e30d958abf78f9eae1abd463fddbeff48f6b25a431b738440cb44c27d524] <==
	I1002 21:55:28.449534       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:55:28.454182       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:55:28.459281       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:55:28.460551       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:55:28.461960       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:55:28.468388       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:55:28.474161       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:55:28.474287       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:55:28.474499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:55:28.474550       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:55:28.479659       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:55:28.486307       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:55:28.486733       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:55:28.489979       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:55:28.490191       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:55:28.490297       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-132977"
	I1002 21:55:28.490378       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:55:28.491548       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:55:28.491997       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:55:28.498068       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:55:28.501189       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:55:28.503401       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:55:28.511714       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:55:28.511738       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:55:28.511761       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2c425c30abeafa4b5be915f1755bea9cf00d3431b02ee8eeec9724a007378df4] <==
	I1002 21:55:27.351254       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:55:27.459785       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:55:27.577563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:55:27.577689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 21:55:27.577797       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:55:27.900414       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:55:27.900530       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:55:27.904518       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:55:27.904870       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:55:27.905045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:55:27.906390       1 config.go:200] "Starting service config controller"
	I1002 21:55:27.945707       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:55:27.909344       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:55:27.956829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:55:27.909372       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:55:27.956924       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:55:27.910352       1 config.go:309] "Starting node config controller"
	I1002 21:55:27.957040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:55:27.957070       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:55:28.046144       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:55:28.058103       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:55:28.058220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [087df5d3fbc7ac3e91447e1eab8fa3241b3549986576b3ecc72ad7f333152d69] <==
	I1002 21:55:24.697746       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:55:27.971941       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:55:27.972052       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:55:27.976982       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:55:27.977023       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:55:27.977067       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:27.977075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:27.977089       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:55:27.977102       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:55:27.978230       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:55:27.978548       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:55:28.077798       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:55:28.077900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:28.077877       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 02 21:55:29 embed-certs-132977 kubelet[778]: I1002 21:55:29.125782     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2whp\" (UniqueName: \"kubernetes.io/projected/73a42dde-ac66-4212-a8db-b75958bd5bfb-kube-api-access-w2whp\") pod \"dashboard-metrics-scraper-6ffb444bf9-58v9q\" (UID: \"73a42dde-ac66-4212-a8db-b75958bd5bfb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q"
	Oct 02 21:55:29 embed-certs-132977 kubelet[778]: W1002 21:55:29.339311     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/crio-5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566 WatchSource:0}: Error finding container 5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566: Status 404 returned error can't find the container with id 5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566
	Oct 02 21:55:29 embed-certs-132977 kubelet[778]: W1002 21:55:29.444709     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/crio-ee37b3c4a6fdfd1885ff4c5d35b701f8fec8f788a2b30075d3c2a37652ec2c53 WatchSource:0}: Error finding container ee37b3c4a6fdfd1885ff4c5d35b701f8fec8f788a2b30075d3c2a37652ec2c53: Status 404 returned error can't find the container with id ee37b3c4a6fdfd1885ff4c5d35b701f8fec8f788a2b30075d3c2a37652ec2c53
	Oct 02 21:55:35 embed-certs-132977 kubelet[778]: I1002 21:55:35.789594     778 scope.go:117] "RemoveContainer" containerID="8472ba3d5d573caeef9c7e6e58024a06a9d3f86711e9b4566e75ac0407596d0d"
	Oct 02 21:55:36 embed-certs-132977 kubelet[778]: I1002 21:55:36.801651     778 scope.go:117] "RemoveContainer" containerID="8472ba3d5d573caeef9c7e6e58024a06a9d3f86711e9b4566e75ac0407596d0d"
	Oct 02 21:55:36 embed-certs-132977 kubelet[778]: I1002 21:55:36.801831     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:36 embed-certs-132977 kubelet[778]: E1002 21:55:36.802028     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:37 embed-certs-132977 kubelet[778]: I1002 21:55:37.799315     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:37 embed-certs-132977 kubelet[778]: E1002 21:55:37.799465     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:39 embed-certs-132977 kubelet[778]: I1002 21:55:39.307653     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:39 embed-certs-132977 kubelet[778]: E1002 21:55:39.307864     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:54 embed-certs-132977 kubelet[778]: I1002 21:55:54.605851     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:54 embed-certs-132977 kubelet[778]: I1002 21:55:54.860459     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:55 embed-certs-132977 kubelet[778]: I1002 21:55:55.864669     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:55:55 embed-certs-132977 kubelet[778]: E1002 21:55:55.865354     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:55 embed-certs-132977 kubelet[778]: I1002 21:55:55.878980     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pncmh" podStartSLOduration=14.587826836 podStartE2EDuration="27.878962907s" podCreationTimestamp="2025-10-02 21:55:28 +0000 UTC" firstStartedPulling="2025-10-02 21:55:29.452478609 +0000 UTC m=+15.266277632" lastFinishedPulling="2025-10-02 21:55:42.74361468 +0000 UTC m=+28.557413703" observedRunningTime="2025-10-02 21:55:43.845277815 +0000 UTC m=+29.659076846" watchObservedRunningTime="2025-10-02 21:55:55.878962907 +0000 UTC m=+41.692761938"
	Oct 02 21:55:57 embed-certs-132977 kubelet[778]: I1002 21:55:57.871581     778 scope.go:117] "RemoveContainer" containerID="7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669"
	Oct 02 21:55:59 embed-certs-132977 kubelet[778]: I1002 21:55:59.307732     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:55:59 embed-certs-132977 kubelet[778]: E1002 21:55:59.308388     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:56:09 embed-certs-132977 kubelet[778]: I1002 21:56:09.604484     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:56:09 embed-certs-132977 kubelet[778]: E1002 21:56:09.604649     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:56:21 embed-certs-132977 kubelet[778]: I1002 21:56:21.604465     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:56:21 embed-certs-132977 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:56:21 embed-certs-132977 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:56:21 embed-certs-132977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c07b17f26fd253ecdc768b4dfbf3f7cead72b2d49933fdc50538afedc65fbf0a] <==
	2025/10/02 21:55:42 Using namespace: kubernetes-dashboard
	2025/10/02 21:55:42 Using in-cluster config to connect to apiserver
	2025/10/02 21:55:42 Using secret token for csrf signing
	2025/10/02 21:55:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:55:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:55:42 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 21:55:42 Generating JWE encryption key
	2025/10/02 21:55:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:55:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:55:43 Initializing JWE encryption key from synchronized object
	2025/10/02 21:55:43 Creating in-cluster Sidecar client
	2025/10/02 21:55:43 Serving insecurely on HTTP port: 9090
	2025/10/02 21:55:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:56:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:55:42 Starting overwatch
	
	
	==> storage-provisioner [7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669] <==
	I1002 21:55:26.907003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:55:57.091597       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8] <==
	I1002 21:55:57.920565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:55:57.935696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:55:57.935820       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:55:57.944437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:01.400505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:05.661112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:09.265220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:12.318439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:15.340834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:15.349329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:15.349587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:56:15.350409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132977_65e14b61-60d3-450e-b093-71dc7183b622!
	I1002 21:56:15.350258       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5980ca91-fb93-47dd-a641-e89a0abe52d9", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132977_65e14b61-60d3-450e-b093-71dc7183b622 became leader
	W1002 21:56:15.358570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:15.361245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:15.451604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132977_65e14b61-60d3-450e-b093-71dc7183b622!
	W1002 21:56:17.364536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:17.371453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:19.375188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:19.379469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:21.382684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:21.388816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:23.391963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:23.400490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
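The kubelet log above shows dashboard-metrics-scraper-6ffb444bf9-58v9q cycling through CrashLoopBackOff with a growing back-off (10s, then 20s). A minimal triage sketch, assuming the kubectl context name the harness itself uses below (embed-certs-132977); the pod and namespace names are taken from the log:

	# Inspect restart count and last container state for the crash-looping pod:
	kubectl --context embed-certs-132977 -n kubernetes-dashboard \
	  get pod dashboard-metrics-scraper-6ffb444bf9-58v9q -o wide
	# The previous container's logs usually show why it keeps exiting:
	kubectl --context embed-certs-132977 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-58v9q --previous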
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-132977 -n embed-certs-132977
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-132977 -n embed-certs-132977: exit status 2 (393.787039ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-132977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
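To reproduce the failing step by hand, the same pause invocation recorded in the Audit table below can be rerun; a sketch, assuming the binary path used throughout this report:

	out/minikube-linux-arm64 pause -p embed-certs-132977 --alsologtostderr -v=1
	# After a successful pause, the APIServer field of `status` reports Paused:
	out/minikube-linux-arm64 status -p embed-certs-132977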
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-132977
helpers_test.go:243: (dbg) docker inspect embed-certs-132977:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7",
	        "Created": "2025-10-02T21:53:21.268918022Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1199084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:55:06.698703149Z",
	            "FinishedAt": "2025-10-02T21:55:05.709183448Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7-json.log",
	        "Name": "/embed-certs-132977",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-132977:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-132977",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7",
	                "LowerDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2e833cdde4a6810938f6c941611af7faba9c713dcad91e7a73f1622564784a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-132977",
	                "Source": "/var/lib/docker/volumes/embed-certs-132977/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-132977",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-132977",
	                "name.minikube.sigs.k8s.io": "embed-certs-132977",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cffca61b61958622d1028c9d7276194684131aeff27aa7ad7416380c29204d5d",
	            "SandboxKey": "/var/run/docker/netns/cffca61b6195",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-132977": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:85:39:a5:e4:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "09517ef0cb9cfbc2b4218dc316c6d2b554ca0576a9445b01545284a1bf270966",
	                    "EndpointID": "777717d75dc4e647f35276aaf6254b3606b58559b36ecdffb5a4328476358ffc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-132977",
	                        "3425438903cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
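The full inspect document is dumped above; a Go template extracts a single field instead, the same pattern minikube itself uses for the SSH port later in this log. A sketch pulling the host port mapped to the API server (8443/tcp):

	docker container inspect embed-certs-132977 \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# With the port mappings shown above, this prints 34214.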
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977: exit status 2 (346.689266ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
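The harness queries one field of minikube's status struct per call ({{.Host}} here, {{.APIServer}} earlier). A combined template is a convenient manual check; a sketch, with {{.Kubelet}} assumed from the same struct:

	out/minikube-linux-arm64 status -p embed-certs-132977 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'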
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-132977 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-132977 logs -n 25: (1.335954484s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:52 UTC │
	│ delete  │ -p cert-expiration-955864                                                                                                                                                                                                                     │ cert-expiration-955864       │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:51 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:51 UTC │ 02 Oct 25 21:53 UTC │
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ stop    │ -p embed-certs-132977 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:55 UTC │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:55:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:55:06.318597 1198906 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:55:06.318746 1198906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:55:06.318770 1198906 out.go:374] Setting ErrFile to fd 2...
	I1002 21:55:06.318776 1198906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:55:06.319053 1198906 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:55:06.319517 1198906 out.go:368] Setting JSON to false
	I1002 21:55:06.320459 1198906 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23844,"bootTime":1759418263,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:55:06.320530 1198906 start.go:140] virtualization:  
	I1002 21:55:06.324044 1198906 out.go:179] * [embed-certs-132977] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:55:06.327161 1198906 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:55:06.327268 1198906 notify.go:221] Checking for updates...
	I1002 21:55:06.333179 1198906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:55:06.336158 1198906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:06.339260 1198906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:55:06.342223 1198906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:55:06.345201 1198906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:55:06.349486 1198906 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:06.350148 1198906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:55:06.389441 1198906 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:55:06.389566 1198906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:55:06.487282 1198906 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:55:06.474837196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:55:06.487407 1198906 docker.go:319] overlay module found
	I1002 21:55:06.490531 1198906 out.go:179] * Using the docker driver based on existing profile
	I1002 21:55:06.493436 1198906 start.go:306] selected driver: docker
	I1002 21:55:06.493472 1198906 start.go:936] validating driver "docker" against &{Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:06.493576 1198906 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:55:06.494434 1198906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:55:06.598451 1198906 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:55:06.58294128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:55:06.598826 1198906 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:55:06.598876 1198906 cni.go:84] Creating CNI manager for ""
	I1002 21:55:06.598943 1198906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:06.598995 1198906 start.go:350] cluster config:
	{Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:06.602618 1198906 out.go:179] * Starting "embed-certs-132977" primary control-plane node in "embed-certs-132977" cluster
	I1002 21:55:06.605491 1198906 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:55:06.608521 1198906 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:55:06.611457 1198906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:55:06.611533 1198906 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:55:06.611554 1198906 cache.go:59] Caching tarball of preloaded images
	I1002 21:55:06.611654 1198906 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:55:06.611708 1198906 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:55:06.611845 1198906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/config.json ...
	I1002 21:55:06.612110 1198906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:55:06.639569 1198906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:55:06.639595 1198906 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:55:06.639609 1198906 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:55:06.639636 1198906 start.go:361] acquireMachinesLock for embed-certs-132977: {Name:mkeaddb5abf9563079c0434ecbd0586026902019 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:55:06.639703 1198906 start.go:365] duration metric: took 43.174µs to acquireMachinesLock for "embed-certs-132977"
	I1002 21:55:06.639726 1198906 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:55:06.639737 1198906 fix.go:55] fixHost starting: 
	I1002 21:55:06.639997 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:06.659606 1198906 fix.go:113] recreateIfNeeded on embed-certs-132977: state=Stopped err=<nil>
	W1002 21:55:06.659639 1198906 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:55:02.328599 1197405 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-842185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.576656851s)
	I1002 21:55:02.328651 1197405 kic.go:203] duration metric: took 4.576833699s to extract preloaded images to volume ...
	W1002 21:55:02.328798 1197405 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:55:02.328906 1197405 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:55:02.395196 1197405 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-842185 --name default-k8s-diff-port-842185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-842185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-842185 --network default-k8s-diff-port-842185 --ip 192.168.85.2 --volume default-k8s-diff-port-842185:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:55:02.716817 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Running}}
	I1002 21:55:02.735757 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:02.761461 1197405 cli_runner.go:164] Run: docker exec default-k8s-diff-port-842185 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:55:02.814777 1197405 oci.go:144] the created container "default-k8s-diff-port-842185" has a running status.
	I1002 21:55:02.814808 1197405 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa...
	I1002 21:55:03.649322 1197405 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:55:03.668261 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:03.687221 1197405 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:55:03.687246 1197405 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-842185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:55:03.734694 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:03.759868 1197405 machine.go:93] provisionDockerMachine start ...
	I1002 21:55:03.759978 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:03.785297 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:03.785637 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:03.785648 1197405 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:55:03.925766 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842185
	
	I1002 21:55:03.925788 1197405 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-842185"
	I1002 21:55:03.925860 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:03.949520 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:03.949830 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:03.949843 1197405 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-842185 && echo "default-k8s-diff-port-842185" | sudo tee /etc/hostname
	I1002 21:55:04.096124 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842185
	
	I1002 21:55:04.096209 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:04.114266 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:04.114573 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:04.114597 1197405 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-842185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-842185/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-842185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:55:04.246418 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: 
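
The shell heredoc above keeps 127.0.1.1 in /etc/hosts pointed at the node's hostname. A standalone Go sketch of the same replace-or-append logic follows; ensureHostsEntry is a hypothetical helper, not part of minikube, and its presence check is looser than the grep -x in the logged script.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry rewrites hosts content so that a "127.0.1.1 <name>" line
// exists: it replaces an existing 127.0.1.1 line or appends a new one.
func ensureHostsEntry(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // hostname already present somewhere
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, entry)
	}
	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "default-k8s-diff-port-842185"))
}
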
	I1002 21:55:04.246448 1197405 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:55:04.246468 1197405 ubuntu.go:190] setting up certificates
	I1002 21:55:04.246490 1197405 provision.go:84] configureAuth start
	I1002 21:55:04.246555 1197405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:55:04.266152 1197405 provision.go:143] copyHostCerts
	I1002 21:55:04.266225 1197405 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:55:04.266242 1197405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:55:04.266319 1197405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:55:04.266419 1197405 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:55:04.266428 1197405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:55:04.266453 1197405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:55:04.266512 1197405 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:55:04.266520 1197405 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:55:04.266543 1197405 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:55:04.266600 1197405 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-842185 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-842185 localhost minikube]
	I1002 21:55:05.105871 1197405 provision.go:177] copyRemoteCerts
	I1002 21:55:05.105950 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:55:05.105995 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.125246 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.221690 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:55:05.238844 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 21:55:05.256616 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:55:05.274738 1197405 provision.go:87] duration metric: took 1.028222796s to configureAuth
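
configureAuth generates a server certificate whose SAN list is logged above (127.0.0.1, 192.168.85.2, the node name, localhost, minikube). A minimal sketch of producing a certificate with that shape using only the Go standard library; it is self-signed here for brevity, whereas minikube signs with its CA key (ca-key.pem), and the 26280h lifetime echoes the CertExpiration value in the cluster config.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-842185"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: IP addresses plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-842185", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
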
	I1002 21:55:05.274808 1197405 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:55:05.275024 1197405 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:05.275176 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.291963 1197405 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:05.292277 1197405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34206 <nil> <nil>}
	I1002 21:55:05.292298 1197405 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:55:05.554362 1197405 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:55:05.554385 1197405 machine.go:96] duration metric: took 1.794497187s to provisionDockerMachine
	I1002 21:55:05.554395 1197405 client.go:171] duration metric: took 8.492260211s to LocalClient.Create
	I1002 21:55:05.554408 1197405 start.go:168] duration metric: took 8.492375457s to libmachine.API.Create "default-k8s-diff-port-842185"
	I1002 21:55:05.554416 1197405 start.go:294] postStartSetup for "default-k8s-diff-port-842185" (driver="docker")
	I1002 21:55:05.554426 1197405 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:55:05.554490 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:55:05.554529 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.573186 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.670327 1197405 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:55:05.673888 1197405 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:55:05.673936 1197405 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:55:05.673947 1197405 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:55:05.674008 1197405 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:55:05.674129 1197405 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:55:05.674236 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:55:05.681859 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:05.709765 1197405 start.go:297] duration metric: took 155.334317ms for postStartSetup
	I1002 21:55:05.710193 1197405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:55:05.729568 1197405 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/config.json ...
	I1002 21:55:05.729847 1197405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:55:05.729890 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.750665 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.851879 1197405 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:55:05.859633 1197405 start.go:129] duration metric: took 8.801318523s to createHost
	I1002 21:55:05.859662 1197405 start.go:84] releasing machines lock for "default-k8s-diff-port-842185", held for 8.801449818s
	I1002 21:55:05.859748 1197405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:55:05.878762 1197405 ssh_runner.go:195] Run: cat /version.json
	I1002 21:55:05.878822 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.879125 1197405 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:55:05.879188 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:05.920476 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:05.935330 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:06.034117 1197405 ssh_runner.go:195] Run: systemctl --version
	I1002 21:55:06.189025 1197405 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:55:06.236904 1197405 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:55:06.243484 1197405 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:55:06.243558 1197405 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:55:06.281278 1197405 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
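
The find/mv step above disables competing bridge and podman CNI configs by renaming them with a ".mk_disabled" suffix. A minimal Go sketch of the same rename pass, under the assumption that running it directly on the node (rather than over SSH, as minikube does) is acceptable.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				fmt.Println("disabled", m)
			}
		}
	}
}
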
	I1002 21:55:06.281308 1197405 start.go:496] detecting cgroup driver to use...
	I1002 21:55:06.281349 1197405 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:55:06.281407 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:55:06.303190 1197405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:55:06.317521 1197405 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:55:06.317582 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:55:06.338220 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:55:06.362756 1197405 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:55:06.531468 1197405 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:55:06.691051 1197405 docker.go:234] disabling docker service ...
	I1002 21:55:06.691133 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:55:06.720127 1197405 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:55:06.736439 1197405 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:55:06.916669 1197405 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:55:07.113784 1197405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:55:07.130251 1197405 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:55:07.145894 1197405 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:55:07.145965 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.157116 1197405 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:55:07.157195 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.168286 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.178703 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.189262 1197405 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:55:07.198658 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.208390 1197405 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.224026 1197405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:07.235663 1197405 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:55:07.247713 1197405 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
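
The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, default sysctls). A minimal sketch of the first two rewrites as string manipulation; setTOMLKey is a hypothetical helper mirroring the logged sed pattern, operating here on an inline sample instead of the real file.

package main

import (
	"fmt"
	"regexp"
)

// setTOMLKey replaces any line assigning key with `key = "value"`,
// matching the sed expression s|^.*key = .*$|key = "value"| from the log.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
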
	I1002 21:55:07.258622 1197405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:07.429782 1197405 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:55:07.634020 1197405 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:55:07.634174 1197405 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:55:07.643169 1197405 start.go:564] Will wait 60s for crictl version
	I1002 21:55:07.643288 1197405 ssh_runner.go:195] Run: which crictl
	I1002 21:55:07.647630 1197405 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:55:07.702342 1197405 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:55:07.702497 1197405 ssh_runner.go:195] Run: crio --version
	I1002 21:55:07.737472 1197405 ssh_runner.go:195] Run: crio --version
	I1002 21:55:07.772373 1197405 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
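
After restarting CRI-O, minikube probes the runtime with the crictl binary shown above. A minimal sketch of the same probe; it assumes crictl is installed at the logged path and that passwordless sudo is available, as on the CI host.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("crictl version failed: %v\n%s", err, out))
	}
	fmt.Print(string(out)) // RuntimeName, RuntimeVersion, RuntimeApiVersion
}
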
	I1002 21:55:06.662971 1198906 out.go:252] * Restarting existing docker container for "embed-certs-132977" ...
	I1002 21:55:06.663075 1198906 cli_runner.go:164] Run: docker start embed-certs-132977
	I1002 21:55:06.993248 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:07.020341 1198906 kic.go:430] container "embed-certs-132977" state is running.
	I1002 21:55:07.020723 1198906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132977
	I1002 21:55:07.052377 1198906 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/config.json ...
	I1002 21:55:07.052602 1198906 machine.go:93] provisionDockerMachine start ...
	I1002 21:55:07.052662 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:07.094804 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:07.095116 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:07.095126 1198906 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:55:07.095818 1198906 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43124->127.0.0.1:34211: read: connection reset by peer
	I1002 21:55:10.258322 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132977
	
	I1002 21:55:10.258406 1198906 ubuntu.go:182] provisioning hostname "embed-certs-132977"
	I1002 21:55:10.258515 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:10.285880 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:10.286263 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:10.286276 1198906 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-132977 && echo "embed-certs-132977" | sudo tee /etc/hostname
	I1002 21:55:10.438939 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-132977
	
	I1002 21:55:10.439065 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:10.471759 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:10.472097 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:10.472125 1198906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-132977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-132977/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-132977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:55:10.626842 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:55:10.626873 1198906 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:55:10.626902 1198906 ubuntu.go:190] setting up certificates
	I1002 21:55:10.626914 1198906 provision.go:84] configureAuth start
	I1002 21:55:10.626989 1198906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132977
	I1002 21:55:10.655798 1198906 provision.go:143] copyHostCerts
	I1002 21:55:10.655870 1198906 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:55:10.655892 1198906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:55:10.655982 1198906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:55:10.656095 1198906 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:55:10.656107 1198906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:55:10.656136 1198906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:55:10.656209 1198906 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:55:10.656219 1198906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:55:10.656245 1198906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:55:10.656297 1198906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.embed-certs-132977 san=[127.0.0.1 192.168.76.2 embed-certs-132977 localhost minikube]
	I1002 21:55:11.117688 1198906 provision.go:177] copyRemoteCerts
	I1002 21:55:11.117760 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:55:11.117814 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:11.139934 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:11.243183 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:55:11.275985 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:55:11.298701 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:55:11.319666 1198906 provision.go:87] duration metric: took 692.733042ms to configureAuth
	I1002 21:55:11.319704 1198906 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:55:11.319905 1198906 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:11.320023 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:07.775336 1197405 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-842185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:55:07.791155 1197405 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:55:07.794986 1197405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:55:07.804388 1197405 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:55:07.804502 1197405 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:55:07.804562 1197405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:07.840843 1197405 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:07.840864 1197405 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:55:07.840918 1197405 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:07.866413 1197405 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:07.866435 1197405 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:55:07.866444 1197405 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1002 21:55:07.866528 1197405 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-842185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
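
The kubelet unit fragment above is written to the node as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). A minimal sketch of rendering such a drop-in with text/template; the flag set is trimmed to three of the flags shown in the logged ExecStart line, and the template itself is illustrative, not minikube's.

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.34.1",
		"Node":    "default-k8s-diff-port-842185",
		"IP":      "192.168.85.2",
	})
	if err != nil {
		panic(err)
	}
}
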
	I1002 21:55:07.866618 1197405 ssh_runner.go:195] Run: crio config
	I1002 21:55:07.919073 1197405 cni.go:84] Creating CNI manager for ""
	I1002 21:55:07.919105 1197405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:07.919123 1197405 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:55:07.919146 1197405 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-842185 NodeName:default-k8s-diff-port-842185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:55:07.919298 1197405 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-842185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
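The config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sketch, assuming gopkg.in/yaml.v3 is on the module path, of walking such a stream with a decoder; the embedded YAML is a trimmed excerpt and only two fields are read back.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const config = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.34.1
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(config))
	for {
		var doc struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the document stream
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc.Kind, doc.CgroupDriver)
	}
}
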
	I1002 21:55:07.919383 1197405 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:55:07.927156 1197405 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:55:07.927226 1197405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:55:07.934749 1197405 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 21:55:07.947488 1197405 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:55:07.961243 1197405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 21:55:07.974153 1197405 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:55:07.977659 1197405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:55:07.987749 1197405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:08.100015 1197405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:08.116257 1197405 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185 for IP: 192.168.85.2
	I1002 21:55:08.116280 1197405 certs.go:195] generating shared ca certs ...
	I1002 21:55:08.116296 1197405 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.116462 1197405 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:55:08.116530 1197405 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:55:08.116544 1197405 certs.go:257] generating profile certs ...
	I1002 21:55:08.116616 1197405 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key
	I1002 21:55:08.116632 1197405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt with IP's: []
	I1002 21:55:08.361821 1197405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt ...
	I1002 21:55:08.361853 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: {Name:mk7432a8cdd18e2212383ff74a6157cd921bad72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.362092 1197405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key ...
	I1002 21:55:08.362111 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key: {Name:mk20a80d1291d28730b7eb8d1820c44a8dbd0bcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.362212 1197405 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507
	I1002 21:55:08.362231 1197405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 21:55:08.663082 1197405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507 ...
	I1002 21:55:08.663117 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507: {Name:mk6ce5138fe023c7635cb852f6a370bcc28b6ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.663311 1197405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507 ...
	I1002 21:55:08.663325 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507: {Name:mk11039677f0c54458819516a143ebfda060be95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.663408 1197405 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt.af0db507 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt
	I1002 21:55:08.663489 1197405 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key
	I1002 21:55:08.663549 1197405 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key
	I1002 21:55:08.663566 1197405 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt with IP's: []
	I1002 21:55:08.737594 1197405 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt ...
	I1002 21:55:08.737624 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt: {Name:mk5008636127be678694cc7cb9c5fdf9c4d19c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.737796 1197405 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key ...
	I1002 21:55:08.737811 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key: {Name:mk3e8ecafadac5c1d58c5a25f1ef491fb5dac0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:08.737990 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:55:08.738057 1197405 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:55:08.738072 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:55:08.738106 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:55:08.738142 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:55:08.738176 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:55:08.738224 1197405 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:08.738931 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:55:08.756289 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:55:08.775158 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:55:08.793859 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:55:08.812278 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 21:55:08.829951 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:55:08.848279 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:55:08.865669 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:55:08.883600 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:55:08.901000 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:55:08.918791 1197405 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:55:08.936381 1197405 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:55:08.949603 1197405 ssh_runner.go:195] Run: openssl version
	I1002 21:55:08.956901 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:55:08.969520 1197405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:08.974087 1197405 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:08.974208 1197405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:09.016312 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:55:09.026917 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:55:09.035157 1197405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:55:09.038886 1197405 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:55:09.038999 1197405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:55:09.081013 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:55:09.089494 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:55:09.097809 1197405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:55:09.101428 1197405 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:55:09.101536 1197405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:55:09.142642 1197405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
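
The ls/openssl/ln sequence above installs each CA into the OpenSSL trust directory: the cert's subject hash (e.g. b5213941 for minikubeCA) names a <hash>.0 symlink under /etc/ssl/certs. A minimal Go sketch of the same hash-and-symlink dance; the paths are illustrative and minikube runs the equivalent over SSH with sudo.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // subject hash, e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
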
	I1002 21:55:09.151830 1197405 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:55:09.155435 1197405 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:55:09.155540 1197405 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:09.155628 1197405 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:55:09.155696 1197405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:55:09.183846 1197405 cri.go:89] found id: ""
	I1002 21:55:09.183943 1197405 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:55:09.191863 1197405 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:55:09.199727 1197405 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:55:09.199824 1197405 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:55:09.207519 1197405 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:55:09.207555 1197405 kubeadm.go:157] found existing configuration files:
	
	I1002 21:55:09.207638 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1002 21:55:09.215463 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:55:09.215575 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:55:09.223251 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1002 21:55:09.231144 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:55:09.231219 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:55:09.238724 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1002 21:55:09.247223 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:55:09.247340 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:55:09.255409 1197405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1002 21:55:09.262921 1197405 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:55:09.263008 1197405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:55:09.270667 1197405 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:55:09.336934 1197405 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:55:09.337186 1197405 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:55:09.403361 1197405 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
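
Bootstrap then hands off to kubeadm with a long --ignore-preflight-errors list, since several checks (swap, kernel config, cgroups v1) are expected to warn inside the KIC container, as the three [WARNING] lines above show. A minimal sketch of composing that call; the binary path and config path are copied from the logged command, and the ignore list is trimmed to a few of the logged entries.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm",
		"init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","),
	)
	fmt.Println(cmd.String()) // minikube runs this under sudo with PATH prepended
}
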
	I1002 21:55:11.341937 1198906 main.go:141] libmachine: Using SSH client type: native
	I1002 21:55:11.342291 1198906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34211 <nil> <nil>}
	I1002 21:55:11.342313 1198906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:55:11.681028 1198906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:55:11.681102 1198906 machine.go:96] duration metric: took 4.628489792s to provisionDockerMachine
	I1002 21:55:11.681142 1198906 start.go:294] postStartSetup for "embed-certs-132977" (driver="docker")
	I1002 21:55:11.681185 1198906 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:55:11.681290 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:55:11.681372 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:11.716730 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:11.826850 1198906 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:55:11.830899 1198906 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:55:11.830929 1198906 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:55:11.830940 1198906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:55:11.830991 1198906 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:55:11.831067 1198906 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:55:11.831176 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:55:11.838769 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:11.856558 1198906 start.go:297] duration metric: took 175.369983ms for postStartSetup
	I1002 21:55:11.856728 1198906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:55:11.856813 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:11.876422 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:11.975607 1198906 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:55:11.985181 1198906 fix.go:57] duration metric: took 5.345437121s for fixHost
	I1002 21:55:11.985209 1198906 start.go:84] releasing machines lock for "embed-certs-132977", held for 5.345493481s
	I1002 21:55:11.985282 1198906 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-132977
	I1002 21:55:12.012587 1198906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:55:12.012655 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:12.012818 1198906 ssh_runner.go:195] Run: cat /version.json
	I1002 21:55:12.012879 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:12.057509 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:12.067956 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:12.174667 1198906 ssh_runner.go:195] Run: systemctl --version
	I1002 21:55:12.276530 1198906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:55:12.329980 1198906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:55:12.336909 1198906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:55:12.336994 1198906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:55:12.350677 1198906 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
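Before switching runtimes, minikube disables any stray bridge/podman CNI configs by renaming them with a .mk_disabled suffix, which is what the find -exec mv above does. A rough Go equivalent of that rename pass (a sketch, not minikube's actual code; it assumes the same directory and suffix convention):

	// Sketch: rename bridge/podman CNI configs in /etc/cni/net.d so CRI-O ignores them.
	package main

	import (
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join("/etc/cni/net.d", name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					log.Fatal(err)
				}
			}
		}
	}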
	I1002 21:55:12.350719 1198906 start.go:496] detecting cgroup driver to use...
	I1002 21:55:12.350753 1198906 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:55:12.350811 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:55:12.373853 1198906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:55:12.392788 1198906 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:55:12.392887 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:55:12.414999 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:55:12.434368 1198906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:55:12.604479 1198906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:55:12.776957 1198906 docker.go:234] disabling docker service ...
	I1002 21:55:12.777047 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:55:12.801041 1198906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:55:12.816397 1198906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:55:12.961743 1198906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:55:13.104311 1198906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:55:13.118908 1198906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:55:13.133309 1198906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:55:13.133424 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.142133 1198906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:55:13.142251 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.152243 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.161789 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.171525 1198906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:55:13.180798 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.190697 1198906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:55:13.200424 1198906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
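The series of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, then re-add conmon_cgroup and the unprivileged-port sysctl. A Go sketch of the first two substitutions using the same regex-over-whole-file approach (illustrative only; run as root):

	// Sketch: the Go analogue of the two logged sed line replacements.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Mirrors: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}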
	I1002 21:55:13.212122 1198906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:55:13.221495 1198906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:55:13.238822 1198906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:13.400531 1198906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:55:13.581247 1198906 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:55:13.581375 1198906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:55:13.587114 1198906 start.go:564] Will wait 60s for crictl version
	I1002 21:55:13.587191 1198906 ssh_runner.go:195] Run: which crictl
	I1002 21:55:13.591557 1198906 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:55:13.636508 1198906 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:55:13.636606 1198906 ssh_runner.go:195] Run: crio --version
	I1002 21:55:13.668599 1198906 ssh_runner.go:195] Run: crio --version
	I1002 21:55:13.705619 1198906 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:55:13.708531 1198906 cli_runner.go:164] Run: docker network inspect embed-certs-132977 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:55:13.740416 1198906 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:55:13.744641 1198906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
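The /etc/hosts update above uses a grep -v, append, cp dance so the file always ends up with exactly one host.minikube.internal entry. The same idempotent rewrite in Go, under the assumption (matching the log) that the hostname is tab-separated from the address:

	// Sketch: drop any existing host.minikube.internal line, then append the new one.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.76.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}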
	I1002 21:55:13.769366 1198906 kubeadm.go:883] updating cluster {Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:55:13.769479 1198906 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:55:13.769542 1198906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:13.826511 1198906 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:13.826532 1198906 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:55:13.826594 1198906 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:55:13.854095 1198906 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:55:13.854115 1198906 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:55:13.854123 1198906 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 21:55:13.854224 1198906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-132977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:55:13.854302 1198906 ssh_runner.go:195] Run: crio config
	I1002 21:55:13.924225 1198906 cni.go:84] Creating CNI manager for ""
	I1002 21:55:13.924284 1198906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:13.924313 1198906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:55:13.924364 1198906 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-132977 NodeName:embed-certs-132977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:55:13.924518 1198906 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-132977"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:55:13.924603 1198906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:55:13.932353 1198906 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:55:13.932470 1198906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:55:13.942818 1198906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 21:55:13.955958 1198906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:55:13.972176 1198906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
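At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new. One property worth checking in a config like this is that the kubelet's cgroupDriver matches the cgroupfs manager configured for CRI-O earlier. A sketch of such a check using the third-party gopkg.in/yaml.v3 package (an assumption; minikube does not validate the file this way):

	// Sketch: decode the KubeletConfiguration document and assert cgroupDriver.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		// Naive multi-document split; fine for the file shown in the log.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var m struct {
				Kind         string `yaml:"kind"`
				CgroupDriver string `yaml:"cgroupDriver"`
			}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				log.Fatal(err)
			}
			if m.Kind == "KubeletConfiguration" && m.CgroupDriver != "cgroupfs" {
				log.Fatalf("kubelet cgroupDriver %q does not match CRI-O's cgroupfs", m.CgroupDriver)
			}
		}
		fmt.Println("kubeadm config cgroup driver is consistent")
	}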
	I1002 21:55:13.985855 1198906 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:55:13.990074 1198906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:55:14.000764 1198906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:14.144471 1198906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:14.163458 1198906 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977 for IP: 192.168.76.2
	I1002 21:55:14.163485 1198906 certs.go:195] generating shared ca certs ...
	I1002 21:55:14.163505 1198906 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:14.163657 1198906 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:55:14.163721 1198906 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:55:14.163736 1198906 certs.go:257] generating profile certs ...
	I1002 21:55:14.163822 1198906 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/client.key
	I1002 21:55:14.163893 1198906 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/apiserver.key.8d55cb16
	I1002 21:55:14.163939 1198906 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/proxy-client.key
	I1002 21:55:14.164056 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:55:14.164090 1198906 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:55:14.164103 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:55:14.164128 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:55:14.164154 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:55:14.164179 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:55:14.164225 1198906 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:55:14.164778 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:55:14.186492 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:55:14.207752 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:55:14.227486 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:55:14.274551 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 21:55:14.303019 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:55:14.332219 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:55:14.364720 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/embed-certs-132977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 21:55:14.411902 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:55:14.436603 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:55:14.491255 1198906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:55:14.535641 1198906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:55:14.563295 1198906 ssh_runner.go:195] Run: openssl version
	I1002 21:55:14.582545 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:55:14.598848 1198906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:14.608864 1198906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:14.608974 1198906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:55:14.655282 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:55:14.663576 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:55:14.672022 1198906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:55:14.676396 1198906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:55:14.676508 1198906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:55:14.732433 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:55:14.741548 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:55:14.749725 1198906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:55:14.753847 1198906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:55:14.753988 1198906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:55:14.795673 1198906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
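The test -L || ln -fs pairs above build OpenSSL's hash-named symlink farm: each CA in /etc/ssl/certs gets a <subject-hash>.0 link (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) so OpenSSL's lookup-by-directory can find it. A Go sketch that derives the hash by shelling out to the openssl binary (assumed present) and creates the link; note minikube links via the copy in /etc/ssl/certs rather than straight to the source file:

	// Sketch: create the /etc/ssl/certs/<subject-hash>.0 symlink for one CA.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			log.Fatal(err)
		}
	}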
	I1002 21:55:14.803945 1198906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:55:14.808159 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:55:14.852024 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:55:14.896394 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:55:14.939517 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:55:14.981303 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:55:15.057785 1198906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
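Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now. The equivalent check with Go's standard library, shown for one of the certs from the log:

	// Sketch: the stdlib analogue of "openssl x509 -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
		}
		fmt.Println("certificate valid for at least 24h")
	}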
	I1002 21:55:15.149722 1198906 kubeadm.go:400] StartCluster: {Name:embed-certs-132977 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-132977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:55:15.149861 1198906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:55:15.149971 1198906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:55:15.240480 1198906 cri.go:89] found id: ""
	I1002 21:55:15.240631 1198906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:55:15.253260 1198906 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:55:15.253337 1198906 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:55:15.253430 1198906 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:55:15.290748 1198906 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:55:15.291217 1198906 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-132977" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:15.291390 1198906 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-132977" cluster setting kubeconfig missing "embed-certs-132977" context setting]
	I1002 21:55:15.291713 1198906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:15.293206 1198906 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:55:15.323404 1198906 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:55:15.323478 1198906 kubeadm.go:601] duration metric: took 70.112584ms to restartPrimaryControlPlane
	I1002 21:55:15.323502 1198906 kubeadm.go:402] duration metric: took 173.789446ms to StartCluster
	I1002 21:55:15.323544 1198906 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:15.323621 1198906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:15.324667 1198906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:15.324924 1198906 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:55:15.325246 1198906 config.go:182] Loaded profile config "embed-certs-132977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:15.325319 1198906 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:55:15.325519 1198906 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-132977"
	I1002 21:55:15.325550 1198906 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-132977"
	W1002 21:55:15.325614 1198906 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:55:15.325654 1198906 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:55:15.325585 1198906 addons.go:69] Setting dashboard=true in profile "embed-certs-132977"
	I1002 21:55:15.325767 1198906 addons.go:238] Setting addon dashboard=true in "embed-certs-132977"
	W1002 21:55:15.325774 1198906 addons.go:247] addon dashboard should already be in state true
	I1002 21:55:15.325846 1198906 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:55:15.326443 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.325596 1198906 addons.go:69] Setting default-storageclass=true in profile "embed-certs-132977"
	I1002 21:55:15.326952 1198906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-132977"
	I1002 21:55:15.327242 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.327600 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.330108 1198906 out.go:179] * Verifying Kubernetes components...
	I1002 21:55:15.333475 1198906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:15.378615 1198906 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:55:15.384876 1198906 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:15.384901 1198906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:55:15.384967 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:15.390302 1198906 addons.go:238] Setting addon default-storageclass=true in "embed-certs-132977"
	W1002 21:55:15.390325 1198906 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:55:15.390349 1198906 host.go:66] Checking if "embed-certs-132977" exists ...
	I1002 21:55:15.390769 1198906 cli_runner.go:164] Run: docker container inspect embed-certs-132977 --format={{.State.Status}}
	I1002 21:55:15.398105 1198906 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:55:15.402190 1198906 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:55:15.405678 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:55:15.405708 1198906 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:55:15.405772 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:15.438271 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:15.438426 1198906 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:15.438441 1198906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:55:15.438497 1198906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-132977
	I1002 21:55:15.460247 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:15.483944 1198906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34211 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/embed-certs-132977/id_rsa Username:docker}
	I1002 21:55:15.790316 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:55:15.790342 1198906 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:55:15.859986 1198906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:15.864526 1198906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:15.875582 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:55:15.875605 1198906 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:55:15.939188 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:55:15.939212 1198906 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:55:16.014804 1198906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:16.100539 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:55:16.100610 1198906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:55:16.235619 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:55:16.235640 1198906 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:55:16.327895 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:55:16.327919 1198906 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:55:16.400544 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:55:16.400567 1198906 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:55:16.451504 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:55:16.451529 1198906 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:55:16.484423 1198906 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:55:16.484449 1198906 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:55:16.534636 1198906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:55:24.015358 1198906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.15533452s)
	I1002 21:55:25.587198 1198906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.722637541s)
	I1002 21:55:25.587277 1198906 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.572402454s)
	I1002 21:55:25.587314 1198906 node_ready.go:35] waiting up to 6m0s for node "embed-certs-132977" to be "Ready" ...
	I1002 21:55:25.729388 1198906 node_ready.go:49] node "embed-certs-132977" is "Ready"
	I1002 21:55:25.729416 1198906 node_ready.go:38] duration metric: took 142.084984ms for node "embed-certs-132977" to be "Ready" ...
	I1002 21:55:25.729432 1198906 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:55:25.729489 1198906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:55:26.371937 1198906 api_server.go:72] duration metric: took 11.046955412s to wait for apiserver process to appear ...
	I1002 21:55:26.371963 1198906 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:55:26.371982 1198906 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:55:26.372883 1198906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.838207614s)
	I1002 21:55:26.387501 1198906 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:55:26.390280 1198906 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:55:26.401151 1198906 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-132977 addons enable metrics-server
	
	I1002 21:55:26.404602 1198906 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 21:55:26.406902 1198906 addons.go:514] duration metric: took 11.081554995s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 21:55:26.872953 1198906 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:55:26.895743 1198906 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:55:26.897417 1198906 api_server.go:141] control plane version: v1.34.1
	I1002 21:55:26.897444 1198906 api_server.go:131] duration metric: took 525.473495ms to wait for apiserver health ...
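The healthz sequence above is expected on a restart: the first probe returns 500 while the rbac/bootstrap-roles post-start hook finishes, and a later probe returns 200. A stripped-down Go sketch of that polling loop (TLS verification is skipped here for brevity; the real api_server.go check uses the cluster's certificates):

	// Sketch: poll the apiserver /healthz endpoint until it reports 200 OK.
	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			// Transient 500s (e.g. rbac/bootstrap-roles pending) fall through to a retry.
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy in time")
	}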
	I1002 21:55:26.897454 1198906 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:55:26.911517 1198906 system_pods.go:59] 8 kube-system pods found
	I1002 21:55:26.911556 1198906 system_pods.go:61] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:55:26.911584 1198906 system_pods.go:61] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:55:26.911601 1198906 system_pods.go:61] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:55:26.911610 1198906 system_pods.go:61] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:55:26.911630 1198906 system_pods.go:61] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:55:26.911636 1198906 system_pods.go:61] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:55:26.911659 1198906 system_pods.go:61] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:55:26.911668 1198906 system_pods.go:61] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:55:26.911686 1198906 system_pods.go:74] duration metric: took 14.207552ms to wait for pod list to return data ...
	I1002 21:55:26.911701 1198906 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:55:26.935857 1198906 default_sa.go:45] found service account: "default"
	I1002 21:55:26.935895 1198906 default_sa.go:55] duration metric: took 24.186982ms for default service account to be created ...
	I1002 21:55:26.935905 1198906 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:55:26.949401 1198906 system_pods.go:86] 8 kube-system pods found
	I1002 21:55:26.949438 1198906 system_pods.go:89] "coredns-66bc5c9577-rl5vq" [ffda283c-d4c2-4713-ae8d-b471ae5f0646] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:55:26.949448 1198906 system_pods.go:89] "etcd-embed-certs-132977" [f6c946ef-5e6e-47d5-b5cb-e469005e9aa1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:55:26.949474 1198906 system_pods.go:89] "kindnet-p845j" [9b859d12-4b29-40f6-92a9-f8c597b013db] Running
	I1002 21:55:26.949488 1198906 system_pods.go:89] "kube-apiserver-embed-certs-132977" [99d9fa0c-0db6-4d8e-8173-2b86ed76fce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:55:26.949496 1198906 system_pods.go:89] "kube-controller-manager-embed-certs-132977" [950a6da7-16ac-4667-9df4-44e825d7caae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:55:26.949505 1198906 system_pods.go:89] "kube-proxy-rslfw" [39333658-6649-4b03-931d-8a103fd98391] Running
	I1002 21:55:26.949513 1198906 system_pods.go:89] "kube-scheduler-embed-certs-132977" [c9eec321-e702-4f28-9e65-8cff8934efa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:55:26.949523 1198906 system_pods.go:89] "storage-provisioner" [0ca82009-cfd7-4947-b4f7-2e5f033edac7] Running
	I1002 21:55:26.949531 1198906 system_pods.go:126] duration metric: took 13.602088ms to wait for k8s-apps to be running ...
	I1002 21:55:26.949557 1198906 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:55:26.949629 1198906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:55:26.980434 1198906 system_svc.go:56] duration metric: took 30.868045ms WaitForService to wait for kubelet
	I1002 21:55:26.980514 1198906 kubeadm.go:586] duration metric: took 11.65553571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:55:26.980549 1198906 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:55:26.997172 1198906 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:55:26.997253 1198906 node_conditions.go:123] node cpu capacity is 2
	I1002 21:55:26.997281 1198906 node_conditions.go:105] duration metric: took 16.712119ms to run NodePressure ...
	I1002 21:55:26.997311 1198906 start.go:242] waiting for startup goroutines ...
	I1002 21:55:26.997352 1198906 start.go:247] waiting for cluster config update ...
	I1002 21:55:26.997378 1198906 start.go:256] writing updated cluster config ...
	I1002 21:55:26.997737 1198906 ssh_runner.go:195] Run: rm -f paused
	I1002 21:55:27.003915 1198906 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:55:27.023438 1198906 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:55:29.060193 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:35.979466 1197405 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:55:35.979524 1197405 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:55:35.979615 1197405 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:55:35.979685 1197405 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:55:35.979723 1197405 kubeadm.go:318] OS: Linux
	I1002 21:55:35.979770 1197405 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:55:35.979820 1197405 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:55:35.979869 1197405 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:55:35.979919 1197405 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:55:35.979969 1197405 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:55:35.980022 1197405 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:55:35.980069 1197405 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:55:35.980119 1197405 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:55:35.980167 1197405 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:55:35.980241 1197405 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:55:35.980351 1197405 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:55:35.980444 1197405 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:55:35.980509 1197405 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:55:35.983998 1197405 out.go:252]   - Generating certificates and keys ...
	I1002 21:55:35.984092 1197405 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:55:35.984159 1197405 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:55:35.984230 1197405 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:55:35.984289 1197405 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:55:35.984353 1197405 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:55:35.984406 1197405 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:55:35.984469 1197405 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:55:35.984616 1197405 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-842185 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:55:35.984673 1197405 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:55:35.984808 1197405 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-842185 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 21:55:35.984882 1197405 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:55:35.984949 1197405 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:55:35.984995 1197405 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:55:35.985062 1197405 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:55:35.985117 1197405 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:55:35.985176 1197405 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:55:35.985234 1197405 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:55:35.985300 1197405 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:55:35.985357 1197405 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:55:35.985442 1197405 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:55:35.985512 1197405 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:55:35.988646 1197405 out.go:252]   - Booting up control plane ...
	I1002 21:55:35.988826 1197405 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:55:35.988958 1197405 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:55:35.989085 1197405 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:55:35.989209 1197405 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:55:35.989313 1197405 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:55:35.989428 1197405 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:55:35.989521 1197405 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:55:35.989565 1197405 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:55:35.989707 1197405 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:55:35.989829 1197405 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:55:35.989895 1197405 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 4.001041271s
	I1002 21:55:35.989997 1197405 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:55:35.990114 1197405 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1002 21:55:35.990214 1197405 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:55:35.990301 1197405 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:55:35.990385 1197405 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.369986048s
	I1002 21:55:35.990459 1197405 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.368376906s
	I1002 21:55:35.990534 1197405 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502724799s
	I1002 21:55:35.990652 1197405 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:55:35.990790 1197405 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:55:35.990869 1197405 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:55:35.991092 1197405 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-842185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:55:35.991155 1197405 kubeadm.go:318] [bootstrap-token] Using token: l5i99u.q2o89w4rszqt38dw
	I1002 21:55:35.994303 1197405 out.go:252]   - Configuring RBAC rules ...
	I1002 21:55:35.994466 1197405 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:55:35.994607 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:55:35.994821 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:55:35.994967 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:55:35.995097 1197405 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:55:35.995193 1197405 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:55:35.995321 1197405 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:55:35.995370 1197405 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:55:35.995421 1197405 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:55:35.995425 1197405 kubeadm.go:318] 
	I1002 21:55:35.995492 1197405 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:55:35.995497 1197405 kubeadm.go:318] 
	I1002 21:55:35.995583 1197405 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:55:35.995587 1197405 kubeadm.go:318] 
	I1002 21:55:35.995616 1197405 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:55:35.995690 1197405 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:55:35.995753 1197405 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:55:35.995758 1197405 kubeadm.go:318] 
	I1002 21:55:35.995824 1197405 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:55:35.995828 1197405 kubeadm.go:318] 
	I1002 21:55:35.995881 1197405 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:55:35.995885 1197405 kubeadm.go:318] 
	I1002 21:55:35.995943 1197405 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:55:35.996029 1197405 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:55:35.996106 1197405 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:55:35.996110 1197405 kubeadm.go:318] 
	I1002 21:55:35.996204 1197405 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:55:35.996290 1197405 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:55:35.996294 1197405 kubeadm.go:318] 
	I1002 21:55:35.996388 1197405 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token l5i99u.q2o89w4rszqt38dw \
	I1002 21:55:35.996503 1197405 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:55:35.996526 1197405 kubeadm.go:318] 	--control-plane 
	I1002 21:55:35.996530 1197405 kubeadm.go:318] 
	I1002 21:55:35.996625 1197405 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:55:35.996629 1197405 kubeadm.go:318] 
	I1002 21:55:35.996721 1197405 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token l5i99u.q2o89w4rszqt38dw \
	I1002 21:55:35.996847 1197405 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
	I1002 21:55:35.996855 1197405 cni.go:84] Creating CNI manager for ""
	I1002 21:55:35.996863 1197405 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:55:36.000175 1197405 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 21:55:31.529837 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:33.530446 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:35.533070 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:36.003982 1197405 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:55:36.014108 1197405 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:55:36.014182 1197405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:55:36.045979 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:55:36.474882 1197405 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:55:36.475041 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:36.475220 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-842185 minikube.k8s.io/updated_at=2025_10_02T21_55_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=default-k8s-diff-port-842185 minikube.k8s.io/primary=true
	I1002 21:55:36.847878 1197405 ops.go:34] apiserver oom_adj: -16
	I1002 21:55:36.848019 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:37.348130 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:37.848587 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:38.349057 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:38.848786 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:39.348277 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:39.848415 1197405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:55:40.088672 1197405 kubeadm.go:1113] duration metric: took 3.613704479s to wait for elevateKubeSystemPrivileges
	I1002 21:55:40.088724 1197405 kubeadm.go:402] duration metric: took 30.933175867s to StartCluster
	I1002 21:55:40.088744 1197405 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:40.088816 1197405 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:55:40.090523 1197405 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:55:40.091031 1197405 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:55:40.091098 1197405 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:55:40.091345 1197405 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:55:40.091402 1197405 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:55:40.091471 1197405 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-842185"
	I1002 21:55:40.091497 1197405 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-842185"
	I1002 21:55:40.091521 1197405 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:55:40.091741 1197405 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-842185"
	I1002 21:55:40.091788 1197405 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-842185"
	I1002 21:55:40.092092 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:40.092551 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:40.097919 1197405 out.go:179] * Verifying Kubernetes components...
	I1002 21:55:40.108985 1197405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:55:40.137419 1197405 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 21:55:38.029658 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:40.030469 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:40.145301 1197405 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-842185"
	I1002 21:55:40.145341 1197405 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:55:40.145790 1197405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:55:40.146986 1197405 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:40.147005 1197405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:55:40.147075 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:40.186732 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:40.198277 1197405 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:40.198298 1197405 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:55:40.198362 1197405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:55:40.222147 1197405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34206 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:55:40.606876 1197405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:55:40.648595 1197405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:55:40.740968 1197405 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:55:40.741160 1197405 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:55:41.941267 1197405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.334303981s)
	I1002 21:55:42.509088 1197405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.860420351s)
	I1002 21:55:42.509420 1197405 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.768130059s)
	I1002 21:55:42.509611 1197405 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.768566315s)
	I1002 21:55:42.509637 1197405 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
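	The long sed pipeline above rewrites CoreDNS's Corefile in place: it inserts a hosts block ahead of the existing "forward . /etc/resolv.conf" stanza and a "log" directive ahead of "errors", then pushes the result back through "kubectl replace -f -". A sketch of the patched stanza this should yield (trimmed, assuming the stock kubeadm Corefile around it):
	
	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.85.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	      ...
	  }
	
	The "fallthrough" line matters: queries for names other than host.minikube.internal fall through the hosts plugin and continue to the forward plugin instead of returning NXDOMAIN.
	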
	I1002 21:55:42.510795 1197405 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-842185" to be "Ready" ...
	I1002 21:55:42.513703 1197405 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1002 21:55:42.046109 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:44.529013 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	I1002 21:55:42.517203 1197405 addons.go:514] duration metric: took 2.425780848s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 21:55:43.015471 1197405 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-842185" context rescaled to 1 replicas
	W1002 21:55:44.513774 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:46.513970 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:46.529273 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:49.028764 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:49.013825 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:51.014317 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:51.529625 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:54.029606 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:53.514110 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:55.514638 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:55:56.529650 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:59.029013 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:55:58.013888 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:00.053535 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:01.530819 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:56:04.030189 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:56:06.031373 1198906 pod_ready.go:104] pod "coredns-66bc5c9577-rl5vq" is not "Ready", error: <nil>
	W1002 21:56:02.513618 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:04.514390 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	I1002 21:56:08.029249 1198906 pod_ready.go:94] pod "coredns-66bc5c9577-rl5vq" is "Ready"
	I1002 21:56:08.029279 1198906 pod_ready.go:86] duration metric: took 41.005766161s for pod "coredns-66bc5c9577-rl5vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.032058 1198906 pod_ready.go:83] waiting for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.036587 1198906 pod_ready.go:94] pod "etcd-embed-certs-132977" is "Ready"
	I1002 21:56:08.036619 1198906 pod_ready.go:86] duration metric: took 4.528118ms for pod "etcd-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.039096 1198906 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.044031 1198906 pod_ready.go:94] pod "kube-apiserver-embed-certs-132977" is "Ready"
	I1002 21:56:08.044061 1198906 pod_ready.go:86] duration metric: took 4.938668ms for pod "kube-apiserver-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.046520 1198906 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.227163 1198906 pod_ready.go:94] pod "kube-controller-manager-embed-certs-132977" is "Ready"
	I1002 21:56:08.227190 1198906 pod_ready.go:86] duration metric: took 180.643151ms for pod "kube-controller-manager-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.427363 1198906 pod_ready.go:83] waiting for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:08.827701 1198906 pod_ready.go:94] pod "kube-proxy-rslfw" is "Ready"
	I1002 21:56:08.827731 1198906 pod_ready.go:86] duration metric: took 400.339176ms for pod "kube-proxy-rslfw" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:09.026852 1198906 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:09.427392 1198906 pod_ready.go:94] pod "kube-scheduler-embed-certs-132977" is "Ready"
	I1002 21:56:09.427422 1198906 pod_ready.go:86] duration metric: took 400.541083ms for pod "kube-scheduler-embed-certs-132977" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:09.427434 1198906 pod_ready.go:40] duration metric: took 42.423427643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:56:09.484351 1198906 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:56:09.489520 1198906 out.go:179] * Done! kubectl is now configured to use "embed-certs-132977" cluster and "default" namespace by default
	W1002 21:56:07.014195 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:09.522741 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:12.014626 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:14.516118 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:17.014426 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:19.014529 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	W1002 21:56:21.017243 1197405 node_ready.go:57] node "default-k8s-diff-port-842185" has "Ready":"False" status (will retry)
	I1002 21:56:22.514402 1197405 node_ready.go:49] node "default-k8s-diff-port-842185" is "Ready"
	I1002 21:56:22.514445 1197405 node_ready.go:38] duration metric: took 40.003609792s for node "default-k8s-diff-port-842185" to be "Ready" ...
	I1002 21:56:22.514459 1197405 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:56:22.514528 1197405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:56:22.534227 1197405 api_server.go:72] duration metric: took 42.442964077s to wait for apiserver process to appear ...
	I1002 21:56:22.534260 1197405 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:56:22.534280 1197405 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1002 21:56:22.544582 1197405 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1002 21:56:22.546530 1197405 api_server.go:141] control plane version: v1.34.1
	I1002 21:56:22.546557 1197405 api_server.go:131] duration metric: took 12.290499ms to wait for apiserver health ...
	I1002 21:56:22.546575 1197405 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:56:22.550340 1197405 system_pods.go:59] 8 kube-system pods found
	I1002 21:56:22.550375 1197405 system_pods.go:61] "coredns-66bc5c9577-5hq6c" [f7ff6f37-0c61-4d47-9268-a767da1b2975] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:56:22.550383 1197405 system_pods.go:61] "etcd-default-k8s-diff-port-842185" [2583503c-a0ec-4ccf-a798-8c474b2d2ca0] Running
	I1002 21:56:22.550388 1197405 system_pods.go:61] "kindnet-qb4vm" [a0408fba-6828-4f17-beb9-7c9d8c06aadb] Running
	I1002 21:56:22.550393 1197405 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-842185" [639f7e81-9c1b-4b09-b244-f84494a340da] Running
	I1002 21:56:22.550398 1197405 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-842185" [37104cf8-9137-492e-984f-c242ceb7c6cd] Running
	I1002 21:56:22.550402 1197405 system_pods.go:61] "kube-proxy-vhggd" [e1640af2-e216-46de-9e27-823b1ba83051] Running
	I1002 21:56:22.550407 1197405 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-842185" [bd62d81a-1b1b-4645-b66e-55cc7f7cc002] Running
	I1002 21:56:22.550413 1197405 system_pods.go:61] "storage-provisioner" [dfbacdc1-e0d1-4a90-9786-25439ee46f26] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:56:22.550419 1197405 system_pods.go:74] duration metric: took 3.83939ms to wait for pod list to return data ...
	I1002 21:56:22.550427 1197405 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:56:22.553459 1197405 default_sa.go:45] found service account: "default"
	I1002 21:56:22.553538 1197405 default_sa.go:55] duration metric: took 3.10377ms for default service account to be created ...
	I1002 21:56:22.553571 1197405 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:56:22.559346 1197405 system_pods.go:86] 8 kube-system pods found
	I1002 21:56:22.559384 1197405 system_pods.go:89] "coredns-66bc5c9577-5hq6c" [f7ff6f37-0c61-4d47-9268-a767da1b2975] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:56:22.559397 1197405 system_pods.go:89] "etcd-default-k8s-diff-port-842185" [2583503c-a0ec-4ccf-a798-8c474b2d2ca0] Running
	I1002 21:56:22.559433 1197405 system_pods.go:89] "kindnet-qb4vm" [a0408fba-6828-4f17-beb9-7c9d8c06aadb] Running
	I1002 21:56:22.559457 1197405 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842185" [639f7e81-9c1b-4b09-b244-f84494a340da] Running
	I1002 21:56:22.559463 1197405 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842185" [37104cf8-9137-492e-984f-c242ceb7c6cd] Running
	I1002 21:56:22.559468 1197405 system_pods.go:89] "kube-proxy-vhggd" [e1640af2-e216-46de-9e27-823b1ba83051] Running
	I1002 21:56:22.559483 1197405 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842185" [bd62d81a-1b1b-4645-b66e-55cc7f7cc002] Running
	I1002 21:56:22.559515 1197405 system_pods.go:89] "storage-provisioner" [dfbacdc1-e0d1-4a90-9786-25439ee46f26] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:56:22.559542 1197405 retry.go:31] will retry after 310.75269ms: missing components: kube-dns
	I1002 21:56:22.874658 1197405 system_pods.go:86] 8 kube-system pods found
	I1002 21:56:22.874688 1197405 system_pods.go:89] "coredns-66bc5c9577-5hq6c" [f7ff6f37-0c61-4d47-9268-a767da1b2975] Running
	I1002 21:56:22.874695 1197405 system_pods.go:89] "etcd-default-k8s-diff-port-842185" [2583503c-a0ec-4ccf-a798-8c474b2d2ca0] Running
	I1002 21:56:22.874701 1197405 system_pods.go:89] "kindnet-qb4vm" [a0408fba-6828-4f17-beb9-7c9d8c06aadb] Running
	I1002 21:56:22.874706 1197405 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842185" [639f7e81-9c1b-4b09-b244-f84494a340da] Running
	I1002 21:56:22.874712 1197405 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842185" [37104cf8-9137-492e-984f-c242ceb7c6cd] Running
	I1002 21:56:22.874720 1197405 system_pods.go:89] "kube-proxy-vhggd" [e1640af2-e216-46de-9e27-823b1ba83051] Running
	I1002 21:56:22.874725 1197405 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842185" [bd62d81a-1b1b-4645-b66e-55cc7f7cc002] Running
	I1002 21:56:22.874741 1197405 system_pods.go:89] "storage-provisioner" [dfbacdc1-e0d1-4a90-9786-25439ee46f26] Running
	I1002 21:56:22.874749 1197405 system_pods.go:126] duration metric: took 321.139053ms to wait for k8s-apps to be running ...
	I1002 21:56:22.874757 1197405 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:56:22.874816 1197405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:56:22.888959 1197405 system_svc.go:56] duration metric: took 14.191283ms WaitForService to wait for kubelet
	I1002 21:56:22.889029 1197405 kubeadm.go:586] duration metric: took 42.797770913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:56:22.889063 1197405 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:56:22.893075 1197405 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:56:22.893109 1197405 node_conditions.go:123] node cpu capacity is 2
	I1002 21:56:22.893124 1197405 node_conditions.go:105] duration metric: took 4.039057ms to run NodePressure ...
	I1002 21:56:22.893164 1197405 start.go:242] waiting for startup goroutines ...
	I1002 21:56:22.893179 1197405 start.go:247] waiting for cluster config update ...
	I1002 21:56:22.893191 1197405 start.go:256] writing updated cluster config ...
	I1002 21:56:22.893485 1197405 ssh_runner.go:195] Run: rm -f paused
	I1002 21:56:22.897840 1197405 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:56:22.902136 1197405 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5hq6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:22.907662 1197405 pod_ready.go:94] pod "coredns-66bc5c9577-5hq6c" is "Ready"
	I1002 21:56:22.907687 1197405 pod_ready.go:86] duration metric: took 5.524614ms for pod "coredns-66bc5c9577-5hq6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:22.910148 1197405 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:22.914895 1197405 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842185" is "Ready"
	I1002 21:56:22.914925 1197405 pod_ready.go:86] duration metric: took 4.753491ms for pod "etcd-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:22.917287 1197405 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:22.921917 1197405 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842185" is "Ready"
	I1002 21:56:22.921945 1197405 pod_ready.go:86] duration metric: took 4.635013ms for pod "kube-apiserver-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:22.924356 1197405 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:23.302385 1197405 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842185" is "Ready"
	I1002 21:56:23.302415 1197405 pod_ready.go:86] duration metric: took 378.032425ms for pod "kube-controller-manager-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:23.504990 1197405 pod_ready.go:83] waiting for pod "kube-proxy-vhggd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:23.903126 1197405 pod_ready.go:94] pod "kube-proxy-vhggd" is "Ready"
	I1002 21:56:23.903150 1197405 pod_ready.go:86] duration metric: took 398.13803ms for pod "kube-proxy-vhggd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:24.103434 1197405 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:24.502676 1197405 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842185" is "Ready"
	I1002 21:56:24.502711 1197405 pod_ready.go:86] duration metric: took 399.248244ms for pod "kube-scheduler-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:56:24.502723 1197405 pod_ready.go:40] duration metric: took 1.60479878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:56:24.571241 1197405 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:56:24.575058 1197405 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842185" cluster and "default" namespace by default
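	With the profile reported Done, the kubeconfig at /home/jenkins/minikube-integration/21683-992084/kubeconfig has default-k8s-diff-port-842185 as the current context. A minimal verification sketch from the host, using standard kubectl commands:
	
	  kubectl config current-context   # expected: default-k8s-diff-port-842185
	  kubectl get nodes -o wide        # node should report Ready
	  kubectl cluster-info             # control plane at https://192.168.85.2:8444, as probed in the healthz check above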
	
	
	==> CRI-O <==
	Oct 02 21:55:57 embed-certs-132977 crio[656]: time="2025-10-02T21:55:57.90243507Z" level=info msg="Created container 92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8: kube-system/storage-provisioner/storage-provisioner" id=76873e2e-1265-4c1d-a2e7-ebb5e181b19b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:55:57 embed-certs-132977 crio[656]: time="2025-10-02T21:55:57.903503956Z" level=info msg="Starting container: 92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8" id=34703225-1220-43f8-b590-1979d9bc9402 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:55:57 embed-certs-132977 crio[656]: time="2025-10-02T21:55:57.908185228Z" level=info msg="Started container" PID=1643 containerID=92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8 description=kube-system/storage-provisioner/storage-provisioner id=34703225-1220-43f8-b590-1979d9bc9402 name=/runtime.v1.RuntimeService/StartContainer sandboxID=016619be059244c05400ac46c5d5f12aaa686d0f8a08381fbe7f9d11edef3d1b
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.692193386Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.699568363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.699604489Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.69962734Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.702836511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.702867796Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.702889859Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.705674836Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.705704751Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.705729571Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.709124789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:56:06 embed-certs-132977 crio[656]: time="2025-10-02T21:56:06.709156295Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.605682801Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=63b50381-210e-45cb-ba11-315145ad6501 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.609083499Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=78059276-021e-44d6-8ccb-896c6babef4c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.612888405Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q/dashboard-metrics-scraper" id=ce72b1f0-41eb-48ab-b7b5-c587077b40ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.613306822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.628844719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.630069106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.654405237Z" level=info msg="Created container d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q/dashboard-metrics-scraper" id=ce72b1f0-41eb-48ab-b7b5-c587077b40ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.655860485Z" level=info msg="Starting container: d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d" id=6598b8e3-320e-4138-873e-5e2b37bdaa1e name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:56:21 embed-certs-132977 crio[656]: time="2025-10-02T21:56:21.658950324Z" level=info msg="Started container" PID=1763 containerID=d570b1278bbcfa5977037093dc6c615be62338db7d396256f659a6baeb204a2d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q/dashboard-metrics-scraper id=6598b8e3-320e-4138-873e-5e2b37bdaa1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566
	Oct 02 21:56:21 embed-certs-132977 conmon[1761]: conmon d570b1278bbcfa597703 <ninfo>: container 1763 exited with status 1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d570b1278bbcf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   5c3e5620b0f29       dashboard-metrics-scraper-6ffb444bf9-58v9q   kubernetes-dashboard
	92fddbea2a089       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   016619be05924       storage-provisioner                          kube-system
	e50c7081a6e25       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   5c3e5620b0f29       dashboard-metrics-scraper-6ffb444bf9-58v9q   kubernetes-dashboard
	c07b17f26fd25       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   ee37b3c4a6fdf       kubernetes-dashboard-855c9754f9-pncmh        kubernetes-dashboard
	208585ddb52b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   739efcab6bc55       coredns-66bc5c9577-rl5vq                     kube-system
	e6f1aefa88ce7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   c2a500c7b47a2       busybox                                      default
	2c425c30abeaf       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   bd69b0eaa803c       kube-proxy-rslfw                             kube-system
	7e5106a143779       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   016619be05924       storage-provisioner                          kube-system
	1fe1e6981e154       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   754cdbb60fce3       kindnet-p845j                                kube-system
	6d58e30d958ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6ddd6ada1c043       kube-controller-manager-embed-certs-132977   kube-system
	087df5d3fbc7a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   187f43369318e       kube-scheduler-embed-certs-132977            kube-system
	78533f77d4400       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   766e69b47deb3       kube-apiserver-embed-certs-132977            kube-system
	94bf7046df1f2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6805590c13580       etcd-embed-certs-132977                      kube-system
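	The table above is the CRI-level container listing for the embed-certs-132977 node; the Exited dashboard-metrics-scraper rows (attempts 2 and 3) line up with the conmon "exited with status 1" message in the CRI-O section. A sketch, assuming the node is still running, of reproducing the listing directly with crictl:
	
	  minikube ssh -p embed-certs-132977
	  # then, inside the node, list all CRI containers including exited ones:
	  sudo crictl ps -a
	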
	
	
	==> coredns [208585ddb52b7d92c0f5e71b6bc1c559b7735c239d80dd07f7714f9c3de4df6c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50463 - 8979 "HINFO IN 9164443044845730516.4692625434705116109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025007498s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
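	The i/o timeouts above are CoreDNS failing to reach the in-cluster API service (10.96.0.1:443) while the pod network and service routing were still coming up, which matches the ~41s of "coredns-66bc5c9577-rl5vq is not Ready" retries in the minikube log earlier in this section. A minimal sketch, using standard kubectl, for checking CoreDNS state in such a case:
	
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	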
	
	
	==> describe nodes <==
	Name:               embed-certs-132977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-132977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=embed-certs-132977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_53_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:53:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-132977
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:56:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:55:54 +0000   Thu, 02 Oct 2025 21:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-132977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 08f34a27958942128ae83ba1536ee2b9
	  System UUID:                3db3ea42-8592-4f96-865b-e348406b1a8e
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-66bc5c9577-rl5vq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-embed-certs-132977                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m38s
	  kube-system                 kindnet-p845j                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-embed-certs-132977             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-embed-certs-132977    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-rslfw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-embed-certs-132977             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-58v9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pncmh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m28s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x8 over 2m48s)  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m35s                  kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s                  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m35s                  kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m31s                  node-controller  Node embed-certs-132977 event: Registered Node embed-certs-132977 in Controller
	  Normal   NodeReady                109s                   kubelet          Node embed-certs-132977 status is now: NodeReady
	  Normal   Starting                 72s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node embed-certs-132977 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node embed-certs-132977 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)      kubelet          Node embed-certs-132977 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node embed-certs-132977 event: Registered Node embed-certs-132977 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [94bf7046df1f24c3099c069c11a0e3c6a2875cedb8d4cf611d9c9244088e5b21] <==
	{"level":"warn","ts":"2025-10-02T21:55:19.605859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.678232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.754650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.808836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.828389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.890707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:19.946246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.028515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.078620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.159669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.170487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.230972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.270312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.339156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.374561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.434281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.464040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.522590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.559556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.610265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.668460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.750618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.829545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:20.835553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:21.062192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42826","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:56:27 up  6:38,  0 user,  load average: 3.96, 3.47, 2.32
	Linux embed-certs-132977 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1fe1e6981e154da7ba02891165e0d46656e53fec146079b96d413f11da41ddf8] <==
	I1002 21:55:26.511121       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:55:26.511525       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:55:26.511696       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:55:26.511948       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:55:26.512011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:55:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:55:26.693827       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:55:26.693933       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:55:26.693967       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:55:26.706936       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:55:56.694720       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:55:56.707293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 21:55:56.707293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:55:56.707475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 21:55:58.194661       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:55:58.194719       1 metrics.go:72] Registering metrics
	I1002 21:55:58.194816       1 controller.go:711] "Syncing nftables rules"
	I1002 21:56:06.691830       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:56:06.691938       1 main.go:301] handling current node
	I1002 21:56:16.690909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:56:16.690943       1 main.go:301] handling current node
	I1002 21:56:26.695399       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 21:56:26.695446       1 main.go:301] handling current node
	
	
	==> kube-apiserver [78533f77d44004e2358097b45d52a78adfc4483e84bf46617c6bb8b7536cf7ce] <==
	I1002 21:55:23.969655       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:55:23.990661       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:55:23.990686       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:55:23.991450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:55:23.997105       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:55:24.001418       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:55:24.010187       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:55:24.010717       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:55:24.010782       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 21:55:24.010805       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:55:24.011389       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:55:24.011424       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:55:24.035859       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:55:24.036345       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:55:24.054525       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:55:25.089180       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:55:25.303731       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:55:25.528438       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:55:25.532723       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:55:25.759475       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:55:26.221096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.204.166"}
	I1002 21:55:26.364836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.16.54"}
	I1002 21:55:28.798733       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:55:28.848442       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:55:28.902455       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6d58e30d958abf78f9eae1abd463fddbeff48f6b25a431b738440cb44c27d524] <==
	I1002 21:55:28.449534       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:55:28.454182       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:55:28.459281       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:55:28.460551       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:55:28.461960       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:55:28.468388       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:55:28.474161       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:55:28.474287       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:55:28.474499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:55:28.474550       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:55:28.479659       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:55:28.486307       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:55:28.486733       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:55:28.489979       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:55:28.490191       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:55:28.490297       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-132977"
	I1002 21:55:28.490378       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:55:28.491548       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:55:28.491997       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:55:28.498068       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:55:28.501189       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:55:28.503401       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:55:28.511714       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:55:28.511738       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:55:28.511761       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2c425c30abeafa4b5be915f1755bea9cf00d3431b02ee8eeec9724a007378df4] <==
	I1002 21:55:27.351254       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:55:27.459785       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:55:27.577563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:55:27.577689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 21:55:27.577797       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:55:27.900414       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:55:27.900530       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:55:27.904518       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:55:27.904870       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:55:27.905045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:55:27.906390       1 config.go:200] "Starting service config controller"
	I1002 21:55:27.945707       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:55:27.909344       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:55:27.956829       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:55:27.909372       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:55:27.956924       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:55:27.910352       1 config.go:309] "Starting node config controller"
	I1002 21:55:27.957040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:55:27.957070       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:55:28.046144       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:55:28.058103       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:55:28.058220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [087df5d3fbc7ac3e91447e1eab8fa3241b3549986576b3ecc72ad7f333152d69] <==
	I1002 21:55:24.697746       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:55:27.971941       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:55:27.972052       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:55:27.976982       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:55:27.977023       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:55:27.977067       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:27.977075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:27.977089       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:55:27.977102       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:55:27.978230       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:55:27.978548       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:55:28.077798       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:55:28.077900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:28.077877       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 02 21:55:29 embed-certs-132977 kubelet[778]: I1002 21:55:29.125782     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2whp\" (UniqueName: \"kubernetes.io/projected/73a42dde-ac66-4212-a8db-b75958bd5bfb-kube-api-access-w2whp\") pod \"dashboard-metrics-scraper-6ffb444bf9-58v9q\" (UID: \"73a42dde-ac66-4212-a8db-b75958bd5bfb\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q"
	Oct 02 21:55:29 embed-certs-132977 kubelet[778]: W1002 21:55:29.339311     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/crio-5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566 WatchSource:0}: Error finding container 5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566: Status 404 returned error can't find the container with id 5c3e5620b0f29209de4700edd0b1e2bfe4d78d78ccb70ea36f06da3b059ee566
	Oct 02 21:55:29 embed-certs-132977 kubelet[778]: W1002 21:55:29.444709     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3425438903cfe2cd4b287315529f3782ca4c82fcf960ab6727c3759a86b489a7/crio-ee37b3c4a6fdfd1885ff4c5d35b701f8fec8f788a2b30075d3c2a37652ec2c53 WatchSource:0}: Error finding container ee37b3c4a6fdfd1885ff4c5d35b701f8fec8f788a2b30075d3c2a37652ec2c53: Status 404 returned error can't find the container with id ee37b3c4a6fdfd1885ff4c5d35b701f8fec8f788a2b30075d3c2a37652ec2c53
	Oct 02 21:55:35 embed-certs-132977 kubelet[778]: I1002 21:55:35.789594     778 scope.go:117] "RemoveContainer" containerID="8472ba3d5d573caeef9c7e6e58024a06a9d3f86711e9b4566e75ac0407596d0d"
	Oct 02 21:55:36 embed-certs-132977 kubelet[778]: I1002 21:55:36.801651     778 scope.go:117] "RemoveContainer" containerID="8472ba3d5d573caeef9c7e6e58024a06a9d3f86711e9b4566e75ac0407596d0d"
	Oct 02 21:55:36 embed-certs-132977 kubelet[778]: I1002 21:55:36.801831     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:36 embed-certs-132977 kubelet[778]: E1002 21:55:36.802028     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:37 embed-certs-132977 kubelet[778]: I1002 21:55:37.799315     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:37 embed-certs-132977 kubelet[778]: E1002 21:55:37.799465     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:39 embed-certs-132977 kubelet[778]: I1002 21:55:39.307653     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:39 embed-certs-132977 kubelet[778]: E1002 21:55:39.307864     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:54 embed-certs-132977 kubelet[778]: I1002 21:55:54.605851     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:54 embed-certs-132977 kubelet[778]: I1002 21:55:54.860459     778 scope.go:117] "RemoveContainer" containerID="a9f8e5f05867788daf8284f1f4aed921f831cccde2eded654bbc5bb4f19293a2"
	Oct 02 21:55:55 embed-certs-132977 kubelet[778]: I1002 21:55:55.864669     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:55:55 embed-certs-132977 kubelet[778]: E1002 21:55:55.865354     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:55:55 embed-certs-132977 kubelet[778]: I1002 21:55:55.878980     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pncmh" podStartSLOduration=14.587826836 podStartE2EDuration="27.878962907s" podCreationTimestamp="2025-10-02 21:55:28 +0000 UTC" firstStartedPulling="2025-10-02 21:55:29.452478609 +0000 UTC m=+15.266277632" lastFinishedPulling="2025-10-02 21:55:42.74361468 +0000 UTC m=+28.557413703" observedRunningTime="2025-10-02 21:55:43.845277815 +0000 UTC m=+29.659076846" watchObservedRunningTime="2025-10-02 21:55:55.878962907 +0000 UTC m=+41.692761938"
	Oct 02 21:55:57 embed-certs-132977 kubelet[778]: I1002 21:55:57.871581     778 scope.go:117] "RemoveContainer" containerID="7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669"
	Oct 02 21:55:59 embed-certs-132977 kubelet[778]: I1002 21:55:59.307732     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:55:59 embed-certs-132977 kubelet[778]: E1002 21:55:59.308388     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:56:09 embed-certs-132977 kubelet[778]: I1002 21:56:09.604484     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:56:09 embed-certs-132977 kubelet[778]: E1002 21:56:09.604649     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-58v9q_kubernetes-dashboard(73a42dde-ac66-4212-a8db-b75958bd5bfb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-58v9q" podUID="73a42dde-ac66-4212-a8db-b75958bd5bfb"
	Oct 02 21:56:21 embed-certs-132977 kubelet[778]: I1002 21:56:21.604465     778 scope.go:117] "RemoveContainer" containerID="e50c7081a6e25a06b2903949fea5041e76da4fffc3c05ceeb25efd912768fa4a"
	Oct 02 21:56:21 embed-certs-132977 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:56:21 embed-certs-132977 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:56:21 embed-certs-132977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c07b17f26fd253ecdc768b4dfbf3f7cead72b2d49933fdc50538afedc65fbf0a] <==
	2025/10/02 21:55:42 Using namespace: kubernetes-dashboard
	2025/10/02 21:55:42 Using in-cluster config to connect to apiserver
	2025/10/02 21:55:42 Using secret token for csrf signing
	2025/10/02 21:55:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:55:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:55:42 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 21:55:42 Generating JWE encryption key
	2025/10/02 21:55:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:55:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:55:43 Initializing JWE encryption key from synchronized object
	2025/10/02 21:55:43 Creating in-cluster Sidecar client
	2025/10/02 21:55:43 Serving insecurely on HTTP port: 9090
	2025/10/02 21:55:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:56:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:55:42 Starting overwatch
	
	
	==> storage-provisioner [7e5106a143779b7fc7ed89dafef6b64d37b92e5587eb7415459d51a51a4ae669] <==
	I1002 21:55:26.907003       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:55:57.091597       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [92fddbea2a089f752704ca9f13550483ecc7b7d873d9bba8a05566b4ac64aec8] <==
	W1002 21:55:57.944437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:01.400505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:05.661112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:09.265220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:12.318439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:15.340834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:15.349329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:15.349587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:56:15.350409       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-132977_65e14b61-60d3-450e-b093-71dc7183b622!
	I1002 21:56:15.350258       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5980ca91-fb93-47dd-a641-e89a0abe52d9", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-132977_65e14b61-60d3-450e-b093-71dc7183b622 became leader
	W1002 21:56:15.358570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:15.361245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:15.451604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-132977_65e14b61-60d3-450e-b093-71dc7183b622!
	W1002 21:56:17.364536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:17.371453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:19.375188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:19.379469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:21.382684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:21.388816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:23.391963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:23.400490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:25.403215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:25.411750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:27.414783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:27.419922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
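Two signals recur in the log above: the host kernel (5.15.0-1084-aws) logs "overlayfs: idmapped layers are currently not supported" on container starts, and in-cluster clients (kindnet and the first storage-provisioner) spend roughly 30 seconds timing out against the apiserver VIP 10.96.0.1:443 after the restart before their caches sync. A minimal sketch to confirm both from the CI host, assuming shell access to the host and that curl is present in the kic node image (the grep pattern is copied from the dmesg lines):

	# Kernel 5.15 predates overlayfs id-mapped layer support, so each affected
	# container start emits one of the lines counted here:
	uname -r
	sudo dmesg | grep -c 'idmapped layers are currently not supported'

	# Probe the service VIP from inside the node to tell a network-side outage
	# apart from an apiserver-side one:
	docker exec embed-certs-132977 curl -sk --max-time 5 https://10.96.0.1:443/version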
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-132977 -n embed-certs-132977
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-132977 -n embed-certs-132977: exit status 2 (363.689902ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-132977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.76s)
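Note that the post-mortem status probe reported the apiserver as Running while the status command itself exited 2, which the harness explicitly tolerates ("may be ok"). A sketch to see the full component breakdown for this profile and which component the non-zero code refers to:

	out/minikube-linux-arm64 status -p embed-certs-132977 --output=json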

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 21:56:34.286354  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (380.227807ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:56:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
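The enable fails before the addon is touched: as the stderr shows, minikube first checks for paused containers by running "sudo runc list -f json" inside the node, and on this node /run/runc does not exist, so runc exits 1 and the command aborts with MK_ADDON_ENABLE_PAUSED. A manual reproduction sketch, assuming the kic node is reachable with docker exec (node name taken from this test):

	docker exec default-k8s-diff-port-842185 sudo runc list -f json
	docker exec default-k8s-diff-port-842185 ls -la /run/runc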
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-842185 describe deploy/metrics-server -n kube-system: exit status 1 (119.406248ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-842185 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
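Because the enable aborted at the paused check, the metrics-server Deployment was never applied, which is why the describe above returns NotFound and the image assertion sees empty deployment info. A sketch to confirm nothing was created (the k8s-app=metrics-server label is the addon's usual selector, assumed here):

	kubectl --context default-k8s-diff-port-842185 -n kube-system get deploy,pods -l k8s-app=metrics-server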
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-842185
E1002 21:56:34.540503  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-842185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f",
	        "Created": "2025-10-02T21:55:02.411044691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1197796,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:55:02.475663232Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/hostname",
	        "HostsPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/hosts",
	        "LogPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f-json.log",
	        "Name": "/default-k8s-diff-port-842185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-842185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-842185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f",
	                "LowerDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-842185",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-842185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-842185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-842185",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-842185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "66c6c2a77acdf3be764f558dd80ee394130635b79ad77708bd6cefee483015d3",
	            "SandboxKey": "/var/run/docker/netns/66c6c2a77acd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-842185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:84:27:02:86:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c75f49aff1de19eab04e162890223324556cf47bb7a7732a62f8c3500b677819",
	                    "EndpointID": "8896c84efcbe31ff6454e25ccb2aad04d1cce6498bf39eb7f1be79d1b32a6de5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-842185",
	                        "724f09ef6992"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
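The inspect output places the failure inside the node rather than at the Docker layer: State.Status is "running", Paused is false, and all five host ports are bound. The same fields can be pulled directly with a Go template (a sketch using the container name from this test):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-842185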
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842185 logs -n 25
E1002 21:56:35.567651  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-842185 logs -n 25: (1.853463421s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-714101 image list --format=json                                                                                                                                                                                               │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ pause   │ -p old-k8s-version-714101 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ delete  │ -p old-k8s-version-714101                                                                                                                                                                                                                     │ old-k8s-version-714101       │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ stop    │ -p embed-certs-132977 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:55 UTC │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:56:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:56:30.939347 1203635 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:56:30.939533 1203635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:56:30.939560 1203635 out.go:374] Setting ErrFile to fd 2...
	I1002 21:56:30.939580 1203635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:56:30.939889 1203635 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:56:30.940442 1203635 out.go:368] Setting JSON to false
	I1002 21:56:30.941544 1203635 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23928,"bootTime":1759418263,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:56:30.941642 1203635 start.go:140] virtualization:  
	I1002 21:56:30.945687 1203635 out.go:179] * [newest-cni-161621] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:56:30.950140 1203635 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:56:30.950257 1203635 notify.go:221] Checking for updates...
	I1002 21:56:30.957174 1203635 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:56:30.960318 1203635 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:56:30.963633 1203635 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:56:30.966672 1203635 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:56:30.969702 1203635 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:56:30.973164 1203635 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:56:30.973321 1203635 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:56:31.006930 1203635 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:56:31.007075 1203635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:56:31.067038 1203635 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:56:31.057256353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:56:31.067159 1203635 docker.go:319] overlay module found
	I1002 21:56:31.070401 1203635 out.go:179] * Using the docker driver based on user configuration
	I1002 21:56:31.073329 1203635 start.go:306] selected driver: docker
	I1002 21:56:31.073355 1203635 start.go:936] validating driver "docker" against <nil>
	I1002 21:56:31.073368 1203635 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:56:31.074179 1203635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:56:31.131945 1203635 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:56:31.121876226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:56:31.132109 1203635 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1002 21:56:31.132184 1203635 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 21:56:31.132578 1203635 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 21:56:31.135082 1203635 out.go:179] * Using Docker driver with root privileges
	I1002 21:56:31.138130 1203635 cni.go:84] Creating CNI manager for ""
	I1002 21:56:31.138320 1203635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:56:31.138344 1203635 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:56:31.138542 1203635 start.go:350] cluster config:
	{Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:56:31.142190 1203635 out.go:179] * Starting "newest-cni-161621" primary control-plane node in "newest-cni-161621" cluster
	I1002 21:56:31.145180 1203635 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:56:31.148241 1203635 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:56:31.151260 1203635 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:56:31.151326 1203635 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:56:31.151340 1203635 cache.go:59] Caching tarball of preloaded images
	I1002 21:56:31.151368 1203635 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:56:31.151437 1203635 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:56:31.151448 1203635 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:56:31.151558 1203635 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/config.json ...
	I1002 21:56:31.151580 1203635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/config.json: {Name:mk9e15793754f8b88a7c20d3860192adce224f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:56:31.173768 1203635 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:56:31.173791 1203635 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:56:31.173810 1203635 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:56:31.173834 1203635 start.go:361] acquireMachinesLock for newest-cni-161621: {Name:mk369c5d3d45aed0e984b21d641c17abd7d1dc57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:56:31.173953 1203635 start.go:365] duration metric: took 98.631µs to acquireMachinesLock for "newest-cni-161621"
	I1002 21:56:31.173983 1203635 start.go:94] Provisioning new machine with config: &{Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:56:31.174152 1203635 start.go:126] createHost starting for "" (driver="docker")
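
Note on the acquireMachinesLock step above: the spec printed in the log (Delay:500ms Timeout:10m0s) describes a poll-until-timeout lock acquisition. The following is a minimal Go sketch of that pattern only, with a hypothetical lock path and helper names; it is not minikube's actual lock implementation.

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock creates the lock file exclusively; O_EXCL fails if it already exists.
	func tryLock(path string) (bool, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			if errors.Is(err, os.ErrExist) {
				return false, nil // another process holds the lock
			}
			return false, err
		}
		return true, f.Close()
	}

	// acquireWithRetry polls tryLock every delay until timeout elapses,
	// mirroring the Delay:500ms Timeout:10m0s spec in the log above.
	func acquireWithRetry(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := tryLock(path)
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring %s after %s", path, timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		// Hypothetical path, used only to make the sketch runnable.
		if err := acquireWithRetry("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("lock acquired")
	}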
	
	
	==> CRI-O <==
	Oct 02 21:56:22 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:22.625415493Z" level=info msg="Created container a47d11e9290d4de6776f7392538bda35cacf6500c39f1d26932a9394021cb798: kube-system/coredns-66bc5c9577-5hq6c/coredns" id=e59518dd-6a95-48f8-a142-df88f4b1da43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:22 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:22.626297152Z" level=info msg="Starting container: a47d11e9290d4de6776f7392538bda35cacf6500c39f1d26932a9394021cb798" id=23a61073-cecf-4841-9a3b-2518f51a439b name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:56:22 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:22.631560939Z" level=info msg="Started container" PID=1738 containerID=a47d11e9290d4de6776f7392538bda35cacf6500c39f1d26932a9394021cb798 description=kube-system/coredns-66bc5c9577-5hq6c/coredns id=23a61073-cecf-4841-9a3b-2518f51a439b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d02532228bbad502a28acf4a3d1206cbcdcf811665e7aae9e0bf598f7a7110a
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.192685299Z" level=info msg="Running pod sandbox: default/busybox/POD" id=994d20eb-89e2-4097-93bf-842d37ce0b38 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.192760234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.206514621Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800 UID:df2c8518-e488-49e5-ad02-f5d32c72a262 NetNS:/var/run/netns/239fe1a8-4ab0-4773-bf11-48c2fe9a0e25 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004caff8}] Aliases:map[]}"
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.206557123Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.226709258Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800 UID:df2c8518-e488-49e5-ad02-f5d32c72a262 NetNS:/var/run/netns/239fe1a8-4ab0-4773-bf11-48c2fe9a0e25 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004caff8}] Aliases:map[]}"
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.226850407Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.235104393Z" level=info msg="Ran pod sandbox e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800 with infra container: default/busybox/POD" id=994d20eb-89e2-4097-93bf-842d37ce0b38 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.239148849Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=745637f8-7184-4438-acd2-2ba2d430ee0a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.239284837Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=745637f8-7184-4438-acd2-2ba2d430ee0a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.239326723Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=745637f8-7184-4438-acd2-2ba2d430ee0a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.240546071Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d81c0d74-7378-4d5b-b4e9-039c7139ac62 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:56:25 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:25.251248239Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.489291896Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d81c0d74-7378-4d5b-b4e9-039c7139ac62 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.490540429Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=66cea428-f241-49d3-bd56-01234074694c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.493984548Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd616902-f107-4f07-8e98-fb68ac29773a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.501827616Z" level=info msg="Creating container: default/busybox/busybox" id=9f4b2943-9817-4938-9a65-7d9938cc2500 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.502920625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.526180764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.52677146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.551814929Z" level=info msg="Created container 3bad460b57dba5b8a1b53db5e313ba9eb554606dded17c0a4895e04156f2fb10: default/busybox/busybox" id=9f4b2943-9817-4938-9a65-7d9938cc2500 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.557112874Z" level=info msg="Starting container: 3bad460b57dba5b8a1b53db5e313ba9eb554606dded17c0a4895e04156f2fb10" id=dac46618-1d65-4c1c-92a4-35b41b79bdfd name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:56:27 default-k8s-diff-port-842185 crio[839]: time="2025-10-02T21:56:27.559138273Z" level=info msg="Started container" PID=1796 containerID=3bad460b57dba5b8a1b53db5e313ba9eb554606dded17c0a4895e04156f2fb10 description=default/busybox/busybox id=dac46618-1d65-4c1c-92a4-35b41b79bdfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800
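
The CRI-O entries above follow the standard CRI image path: ImageStatus reports gcr.io/k8s-minikube/busybox:1.28.4-glibc missing, PullImage fetches it (resolving to the sha256 digest logged), and CreateContainer/StartContainer then run it. As an illustration of driving the same two image RPCs with the upstream k8s.io/cri-api bindings — the socket path and timeout here are assumptions, not taken from this run:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Default CRI-O socket; adjust for non-standard installs.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ic := runtimeapi.NewImageServiceClient(conn)
		img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// ImageStatus: CRI-O logged "Image ... not found" for this call.
		st, err := ic.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
		if err != nil {
			log.Fatal(err)
		}
		if st.Image == nil {
			// PullImage: matches the "Pulling image" / "Pulled image" lines.
			if _, err := ic.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img}); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println("image present")
	}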
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	3bad460b57dba       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   e7e34b4a5025b       busybox                                                default
	a47d11e9290d4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   6d02532228bba       coredns-66bc5c9577-5hq6c                               kube-system
	e255be7d8f7ba       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   91c32bcd8f2ab       storage-provisioner                                    kube-system
	267049ec7a288       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   0058d579f36fa       kube-proxy-vhggd                                       kube-system
	73fcf45bb2cf9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   b9d7c6717b761       kindnet-qb4vm                                          kube-system
	f23a5987dbd0a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   10ab2360c722d       kube-scheduler-default-k8s-diff-port-842185            kube-system
	514a42654de95       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6688347b1d8ba       kube-apiserver-default-k8s-diff-port-842185            kube-system
	ca31ee49ccc4f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ab5605a667bd8       kube-controller-manager-default-k8s-diff-port-842185   kube-system
	e2c923fbde10d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   4b63538e1f60d       etcd-default-k8s-diff-port-842185                      kube-system
	
	
	==> coredns [a47d11e9290d4de6776f7392538bda35cacf6500c39f1d26932a9394021cb798] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45574 - 63015 "HINFO IN 5005484170995273885.579196671148020984. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.028719098s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-842185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-842185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=default-k8s-diff-port-842185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_55_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:55:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-842185
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:56:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:56:22 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:56:22 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:56:22 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:56:22 +0000   Thu, 02 Oct 2025 21:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-842185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d6f896e8f764a1ea96104b3d0bc43c2
	  System UUID:                aa48841e-0403-43a7-8420-f3cab19a557a
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-5hq6c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-842185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-qb4vm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-842185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-842185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-vhggd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-842185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-842185 event: Registered Node default-k8s-diff-port-842185 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-842185 status is now: NodeReady
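
The description above is the apiserver's view of the Node object; the Conditions rows in particular come from node.Status.Conditions. A small client-go sketch that fetches the same rows (using this run's kubeconfig path, which may differ on other hosts):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/21683-992084/kubeconfig"
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-842185", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}

		// Same columns as the Conditions table rendered above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}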
	
	
	==> dmesg <==
	[Oct 2 21:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:21] overlayfs: idmapped layers are currently not supported
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2c923fbde10d0452650ac8edc7687654d81c2722144e7b474b5d9faa41a3278] <==
	{"level":"warn","ts":"2025-10-02T21:55:29.805553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:29.829080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:29.859712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:29.890229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:29.917836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:29.949033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.009125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.034296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.087437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.131953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.163730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.209337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.257659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.322549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.363052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.380090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.407034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.450887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.478468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.511939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.551405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.581210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.610149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.649332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:55:30.802444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:56:36 up  6:38,  0 user,  load average: 3.73, 3.43, 2.33
	Linux default-k8s-diff-port-842185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73fcf45bb2cf97a5655fe7f30eb940be2cc9c4d658463c24b575872c629a8268] <==
	I1002 21:55:41.515167       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:55:41.515426       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:55:41.515549       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:55:41.515560       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:55:41.515573       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:55:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:55:41.772203       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:55:41.772294       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:55:41.772307       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:55:41.773384       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:56:11.772598       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:56:11.773586       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:56:11.773601       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:56:11.773686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:56:12.972518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:56:12.972645       1 metrics.go:72] Registering metrics
	I1002 21:56:12.972785       1 controller.go:711] "Syncing nftables rules"
	I1002 21:56:21.774408       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:56:21.774457       1 main.go:301] handling current node
	I1002 21:56:31.774297       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:56:31.774331       1 main.go:301] handling current node
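
The kindnet log is a good example of client-go's reflector behavior: the list/watch calls fail with i/o timeouts at 21:56:11 while the apiserver is unreachable, the reflector retries internally, and "Caches are synced" appears once connectivity returns. A minimal informer sketch that exercises the same wait-for-sync path (in-cluster config assumed, as a DaemonSet pod such as kindnet would use):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		factory := informers.NewSharedInformerFactory(cs, 0)
		nodes := factory.Core().V1().Nodes().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// The reflector behind the informer retries failed list/watch calls,
		// so this blocks through transient apiserver outages like the one logged.
		if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
			log.Fatal("caches never synced")
		}
		fmt.Println("caches are synced")
	}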
	
	
	==> kube-apiserver [514a42654de959c9daf7cc513435b7e3ab32ead81e39ac8fb1fdf1bbe004ed3f] <==
	I1002 21:55:32.230686       1 policy_source.go:240] refreshing policies
	I1002 21:55:32.230729       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:55:32.315450       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:55:32.352878       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:55:32.353394       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 21:55:32.361492       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1002 21:55:32.368938       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1002 21:55:32.413083       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:55:32.419887       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:55:32.823729       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 21:55:32.838589       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:55:32.839092       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:55:33.929358       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:55:34.014894       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:55:34.117125       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:55:34.197479       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 21:55:34.211405       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 21:55:34.216522       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:55:34.229597       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:55:35.381691       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:55:35.401128       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 21:55:35.444324       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:55:39.614211       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:55:40.474938       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:55:40.568759       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ca31ee49ccc4fbe7d58dd83dbdac382b2a28a07cf993b2639f90078eb03ddbbf] <==
	I1002 21:55:39.165345       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:55:39.165665       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:55:39.171530       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:55:39.173425       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-842185" podCIDRs=["10.244.0.0/24"]
	I1002 21:55:39.173995       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:55:39.184691       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:55:39.185780       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:55:39.195864       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:55:39.203422       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:55:39.206187       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:55:39.208442       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:55:39.208543       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:55:39.208627       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-842185"
	I1002 21:55:39.208694       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 21:55:39.208723       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:55:39.209368       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:55:39.209396       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:55:39.211035       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:55:39.212623       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:55:39.212858       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:55:39.216347       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:55:39.257370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:55:39.257407       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:55:39.257415       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:56:24.215738       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [267049ec7a288d4ab15e1ff72d5f395b97e4be2657f65435f3415738df43bcb3] <==
	I1002 21:55:41.628048       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:55:42.018642       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:55:42.118959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:55:42.119013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:55:42.119107       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:55:42.324702       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:55:42.324778       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:55:42.338218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:55:42.338696       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:55:42.339409       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:55:42.341710       1 config.go:200] "Starting service config controller"
	I1002 21:55:42.341823       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:55:42.341898       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:55:42.341934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:55:42.341971       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:55:42.342007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:55:42.351906       1 config.go:309] "Starting node config controller"
	I1002 21:55:42.352024       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:55:42.352057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:55:42.443760       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:55:42.443793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:55:42.443840       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f23a5987dbd0a144be659b806ff714c00a6eabde72953fba8929d10149396d2b] <==
	I1002 21:55:32.373923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:32.377659       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:55:32.373946       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 21:55:32.410493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:55:32.410844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:55:32.413332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 21:55:32.417544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 21:55:32.417812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 21:55:32.434589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 21:55:32.448659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:55:32.448758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:55:32.448813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:55:32.448891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 21:55:32.456161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 21:55:32.456304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:55:32.456416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:55:32.456517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:55:32.456626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:55:32.456744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:55:32.456847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:55:32.457062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 21:55:32.463180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:55:33.324725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:55:33.336601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 21:55:36.483981       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:55:36 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:36.780806    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-842185" podStartSLOduration=1.780790483 podStartE2EDuration="1.780790483s" podCreationTimestamp="2025-10-02 21:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:55:36.734993099 +0000 UTC m=+1.446101138" watchObservedRunningTime="2025-10-02 21:55:36.780790483 +0000 UTC m=+1.491898522"
	Oct 02 21:55:36 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:36.833469    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-842185" podStartSLOduration=1.8334499389999999 podStartE2EDuration="1.833449939s" podCreationTimestamp="2025-10-02 21:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:55:36.782145376 +0000 UTC m=+1.493253407" watchObservedRunningTime="2025-10-02 21:55:36.833449939 +0000 UTC m=+1.544557978"
	Oct 02 21:55:39 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:39.177909    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 21:55:39 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:39.179062    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002619    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0408fba-6828-4f17-beb9-7c9d8c06aadb-xtables-lock\") pod \"kindnet-qb4vm\" (UID: \"a0408fba-6828-4f17-beb9-7c9d8c06aadb\") " pod="kube-system/kindnet-qb4vm"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002675    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0408fba-6828-4f17-beb9-7c9d8c06aadb-lib-modules\") pod \"kindnet-qb4vm\" (UID: \"a0408fba-6828-4f17-beb9-7c9d8c06aadb\") " pod="kube-system/kindnet-qb4vm"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002706    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1640af2-e216-46de-9e27-823b1ba83051-xtables-lock\") pod \"kube-proxy-vhggd\" (UID: \"e1640af2-e216-46de-9e27-823b1ba83051\") " pod="kube-system/kube-proxy-vhggd"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002725    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgzlm\" (UniqueName: \"kubernetes.io/projected/e1640af2-e216-46de-9e27-823b1ba83051-kube-api-access-rgzlm\") pod \"kube-proxy-vhggd\" (UID: \"e1640af2-e216-46de-9e27-823b1ba83051\") " pod="kube-system/kube-proxy-vhggd"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002748    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a0408fba-6828-4f17-beb9-7c9d8c06aadb-cni-cfg\") pod \"kindnet-qb4vm\" (UID: \"a0408fba-6828-4f17-beb9-7c9d8c06aadb\") " pod="kube-system/kindnet-qb4vm"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002767    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1640af2-e216-46de-9e27-823b1ba83051-lib-modules\") pod \"kube-proxy-vhggd\" (UID: \"e1640af2-e216-46de-9e27-823b1ba83051\") " pod="kube-system/kube-proxy-vhggd"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002785    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfzs\" (UniqueName: \"kubernetes.io/projected/a0408fba-6828-4f17-beb9-7c9d8c06aadb-kube-api-access-tjfzs\") pod \"kindnet-qb4vm\" (UID: \"a0408fba-6828-4f17-beb9-7c9d8c06aadb\") " pod="kube-system/kindnet-qb4vm"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.002805    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1640af2-e216-46de-9e27-823b1ba83051-kube-proxy\") pod \"kube-proxy-vhggd\" (UID: \"e1640af2-e216-46de-9e27-823b1ba83051\") " pod="kube-system/kube-proxy-vhggd"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.135705    1317 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: W1002 21:55:41.228149    1317 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/crio-0058d579f36fa7cac180f3311b33a807deea1bf035510fd11f3d7204ad6910ac WatchSource:0}: Error finding container 0058d579f36fa7cac180f3311b33a807deea1bf035510fd11f3d7204ad6910ac: Status 404 returned error can't find the container with id 0058d579f36fa7cac180f3311b33a807deea1bf035510fd11f3d7204ad6910ac
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.876816    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhggd" podStartSLOduration=1.87679711 podStartE2EDuration="1.87679711s" podCreationTimestamp="2025-10-02 21:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:55:41.768537762 +0000 UTC m=+6.479645801" watchObservedRunningTime="2025-10-02 21:55:41.87679711 +0000 UTC m=+6.587905149"
	Oct 02 21:55:41 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:55:41.876934    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qb4vm" podStartSLOduration=1.876928749 podStartE2EDuration="1.876928749s" podCreationTimestamp="2025-10-02 21:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:55:41.876436529 +0000 UTC m=+6.587544576" watchObservedRunningTime="2025-10-02 21:55:41.876928749 +0000 UTC m=+6.588036788"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.181461    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.282440    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dfbacdc1-e0d1-4a90-9786-25439ee46f26-tmp\") pod \"storage-provisioner\" (UID: \"dfbacdc1-e0d1-4a90-9786-25439ee46f26\") " pod="kube-system/storage-provisioner"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.282548    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7ff6f37-0c61-4d47-9268-a767da1b2975-config-volume\") pod \"coredns-66bc5c9577-5hq6c\" (UID: \"f7ff6f37-0c61-4d47-9268-a767da1b2975\") " pod="kube-system/coredns-66bc5c9577-5hq6c"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.282610    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfzl\" (UniqueName: \"kubernetes.io/projected/dfbacdc1-e0d1-4a90-9786-25439ee46f26-kube-api-access-6qfzl\") pod \"storage-provisioner\" (UID: \"dfbacdc1-e0d1-4a90-9786-25439ee46f26\") " pod="kube-system/storage-provisioner"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.282660    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjvr\" (UniqueName: \"kubernetes.io/projected/f7ff6f37-0c61-4d47-9268-a767da1b2975-kube-api-access-tbjvr\") pod \"coredns-66bc5c9577-5hq6c\" (UID: \"f7ff6f37-0c61-4d47-9268-a767da1b2975\") " pod="kube-system/coredns-66bc5c9577-5hq6c"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.804283    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5hq6c" podStartSLOduration=42.804262885 podStartE2EDuration="42.804262885s" podCreationTimestamp="2025-10-02 21:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:56:22.774444986 +0000 UTC m=+47.485553025" watchObservedRunningTime="2025-10-02 21:56:22.804262885 +0000 UTC m=+47.515370924"
	Oct 02 21:56:22 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:22.820945    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.820916872 podStartE2EDuration="40.820916872s" podCreationTimestamp="2025-10-02 21:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:56:22.804966333 +0000 UTC m=+47.516074364" watchObservedRunningTime="2025-10-02 21:56:22.820916872 +0000 UTC m=+47.532024911"
	Oct 02 21:56:24 default-k8s-diff-port-842185 kubelet[1317]: I1002 21:56:24.905286    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5lpd\" (UniqueName: \"kubernetes.io/projected/df2c8518-e488-49e5-ad02-f5d32c72a262-kube-api-access-r5lpd\") pod \"busybox\" (UID: \"df2c8518-e488-49e5-ad02-f5d32c72a262\") " pod="default/busybox"
	Oct 02 21:56:25 default-k8s-diff-port-842185 kubelet[1317]: W1002 21:56:25.233772    1317 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/crio-e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800 WatchSource:0}: Error finding container e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800: Status 404 returned error can't find the container with id e7e34b4a5025b0da73147b0aaeece63fe65cc6db952c923284c981761eb4e800
	
	
	==> storage-provisioner [e255be7d8f7ba26fe6f11b3df4d3b7652fb779f02c391178f2492b1ea8c0aa40] <==
	I1002 21:56:22.619605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:56:22.646075       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:56:22.646569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:56:22.650352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:22.657156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:22.657408       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:56:22.658147       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8b17831-0c1d-4950-9708-ff3cf4191d2a", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-842185_18d9ab69-4e81-4fb2-8cff-10f72d6343d4 became leader
	W1002 21:56:22.671084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:22.671168       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842185_18d9ab69-4e81-4fb2-8cff-10f72d6343d4!
	W1002 21:56:22.697634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:56:22.777986       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842185_18d9ab69-4e81-4fb2-8cff-10f72d6343d4!
	W1002 21:56:24.714720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:24.730371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:26.733350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:26.741075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:28.744331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:28.748880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:30.752083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:30.758072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:32.761406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:32.771177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:34.775962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:34.788834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:36.795889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:56:36.802182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
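Note: the kube-scheduler "Failed to watch ... is forbidden" errors in the logs above all predate the "Caches are synced" message at 21:55:36, which is the usual transient RBAC race during control-plane bootstrap rather than the cause of this failure. A minimal sketch to confirm the scheduler's permissions did converge, assuming the profile's kubeconfig context is still reachable:

	# impersonate the scheduler and ask the apiserver whether the list is allowed
	kubectl --context default-k8s-diff-port-842185 auth can-i list pods --as=system:kube-scheduler

A "yes" answer indicates the forbidden errors were only a startup-ordering issue, not a persistent RBAC misconfiguration.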
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.00s)
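Note: the storage-provisioner's repeated "v1 Endpoints is deprecated in v1.33+" warnings come from its leader election, which still stores its lease in an Endpoints object (the k8s.io-minikube-hostpath object named in the LeaderElection event above). A quick way to inspect that lease object, assuming the cluster is still up:

	# the Endpoints object carries the leader-election annotations
	kubectl --context default-k8s-diff-port-842185 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml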

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (347.453254ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
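Note: the MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state probe, which (per the stderr) runs "sudo runc list -f json" on the node; here the probe failed because /run/runc did not exist yet. A minimal sketch to reproduce the same probe by hand, assuming the newest-cni-161621 node is still running:

	# the same command minikube's paused check runs, per the error text above
	minikube ssh -p newest-cni-161621 -- sudo runc list -f json
	# check whether runc's state directory exists at all
	minikube ssh -p newest-cni-161621 -- ls -la /run/runc

If /run/runc is absent, runc has not yet created any container state, so the probe exits non-zero even when the node itself is healthy.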
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-161621
helpers_test.go:243: (dbg) docker inspect newest-cni-161621:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad",
	        "Created": "2025-10-02T21:56:36.536154593Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1204355,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:56:36.603343941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/hostname",
	        "HostsPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/hosts",
	        "LogPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad-json.log",
	        "Name": "/newest-cni-161621",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-161621:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-161621",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad",
	                "LowerDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-161621",
	                "Source": "/var/lib/docker/volumes/newest-cni-161621/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-161621",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-161621",
	                "name.minikube.sigs.k8s.io": "newest-cni-161621",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d34c84f64895470d03051ce31a7be0e896743b91e8f8ad47c7c91ee0ac632f96",
	            "SandboxKey": "/var/run/docker/netns/d34c84f64895",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-161621": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:58:9d:f5:cb:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "155fa7e09312067d402503432852b0577257ec8e8857e0c34bee66c5c9279cb6",
	                    "EndpointID": "b25752699041d2c7e8fece08ff1fdf10a20ab3cde3e5af2c6339216de05e8c65",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-161621",
	                        "4274608d314f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
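Note: the inspect output above shows each published port (for example 8443/tcp, the apiserver, mapped to 127.0.0.1:34219). A compact way to pull just the port map back out, assuming the container still exists:

	# print only the NetworkSettings.Ports map as JSON
	docker inspect newest-cni-161621 --format '{{json .NetworkSettings.Ports}}'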
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-161621 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-161621 logs -n 25: (1.484292293s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-661954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │                     │
	│ stop    │ -p no-preload-661954 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ addons  │ enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	│ start   │ -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:54 UTC │
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ stop    │ -p embed-certs-132977 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:55 UTC │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842185 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:56:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:56:50.595338 1206253 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:56:50.595974 1206253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:56:50.596011 1206253 out.go:374] Setting ErrFile to fd 2...
	I1002 21:56:50.596032 1206253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:56:50.596360 1206253 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:56:50.596815 1206253 out.go:368] Setting JSON to false
	I1002 21:56:50.597816 1206253 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23948,"bootTime":1759418263,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:56:50.597912 1206253 start.go:140] virtualization:  
	I1002 21:56:50.602999 1206253 out.go:179] * [default-k8s-diff-port-842185] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:56:50.606212 1206253 notify.go:221] Checking for updates...
	I1002 21:56:50.606786 1206253 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:56:50.610140 1206253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:56:50.613016 1206253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:56:50.615943 1206253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:56:50.618823 1206253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:56:50.622570 1206253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:56:50.626118 1206253 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:56:50.626755 1206253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:56:50.664037 1206253 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:56:50.664157 1206253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:56:50.759259 1206253 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:56:50.748543729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:56:50.759365 1206253 docker.go:319] overlay module found
	I1002 21:56:50.762355 1206253 out.go:179] * Using the docker driver based on existing profile
	I1002 21:56:50.765097 1206253 start.go:306] selected driver: docker
	I1002 21:56:50.765111 1206253 start.go:936] validating driver "docker" against &{Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:56:50.765201 1206253 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:56:50.765869 1206253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:56:50.874389 1206253 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:56:50.865334177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:56:50.874710 1206253 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:56:50.874736 1206253 cni.go:84] Creating CNI manager for ""
	I1002 21:56:50.874794 1206253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:56:50.874827 1206253 start.go:350] cluster config:
	{Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
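The only non-default setting in this profile is the API server port, 8444 instead of the usual 8443 (APIServerPort:8444 above), which is what the "default-k8s-diff-port" name refers to. A minimal sketch of creating such a profile by hand, using minikube's standard start flags:

    minikube start -p default-k8s-diff-port-842185 \
      --driver=docker --container-runtime=crio --apiserver-port=8444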
	I1002 21:56:50.877926 1206253 out.go:179] * Starting "default-k8s-diff-port-842185" primary control-plane node in "default-k8s-diff-port-842185" cluster
	I1002 21:56:50.880697 1206253 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:56:50.883583 1206253 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:56:50.886505 1206253 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:56:50.886569 1206253 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:56:50.886580 1206253 cache.go:59] Caching tarball of preloaded images
	I1002 21:56:50.886661 1206253 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:56:50.886671 1206253 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:56:50.886786 1206253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/config.json ...
	I1002 21:56:50.886993 1206253 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:56:50.912756 1206253 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:56:50.912774 1206253 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:56:50.912786 1206253 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:56:50.912809 1206253 start.go:361] acquireMachinesLock for default-k8s-diff-port-842185: {Name:mkfb55a0d771815f9e4a8a414bd4a3a0909f4b93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:56:50.912857 1206253 start.go:365] duration metric: took 31.926µs to acquireMachinesLock for "default-k8s-diff-port-842185"
	I1002 21:56:50.912876 1206253 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:56:50.912881 1206253 fix.go:55] fixHost starting: 
	I1002 21:56:50.913164 1206253 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:56:50.946160 1206253 fix.go:113] recreateIfNeeded on default-k8s-diff-port-842185: state=Stopped err=<nil>
	W1002 21:56:50.946220 1206253 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 21:56:50.949397 1206253 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-842185" ...
	I1002 21:56:50.949485 1206253 cli_runner.go:164] Run: docker start default-k8s-diff-port-842185
	I1002 21:56:51.300114 1206253 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:56:51.329174 1206253 kic.go:430] container "default-k8s-diff-port-842185" state is running.
	I1002 21:56:51.330346 1206253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:56:51.364504 1206253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/config.json ...
	I1002 21:56:51.364729 1206253 machine.go:93] provisionDockerMachine start ...
	I1002 21:56:51.364795 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:51.400427 1206253 main.go:141] libmachine: Using SSH client type: native
	I1002 21:56:51.400758 1206253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34221 <nil> <nil>}
	I1002 21:56:51.400775 1206253 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:56:51.401687 1206253 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:56:54.573760 1206253 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842185
	
	I1002 21:56:54.573857 1206253 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-842185"
	I1002 21:56:54.573969 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:54.613035 1206253 main.go:141] libmachine: Using SSH client type: native
	I1002 21:56:54.613362 1206253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34221 <nil> <nil>}
	I1002 21:56:54.613375 1206253 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-842185 && echo "default-k8s-diff-port-842185" | sudo tee /etc/hostname
	I1002 21:56:54.823443 1206253 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842185
	
	I1002 21:56:54.823584 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:54.869016 1206253 main.go:141] libmachine: Using SSH client type: native
	I1002 21:56:54.869330 1206253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34221 <nil> <nil>}
	I1002 21:56:54.869349 1206253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-842185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-842185/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-842185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:56:55.038457 1206253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
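The script above only touches the 127.0.1.1 entry when the hostname is not already present, so re-provisioning the same machine is idempotent (hence the empty output on this restart). A quick check of the result, as a sketch using the profile name from this run:

    minikube -p default-k8s-diff-port-842185 ssh -- grep default-k8s-diff-port-842185 /etc/hosts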
	I1002 21:56:55.038487 1206253 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:56:55.038567 1206253 ubuntu.go:190] setting up certificates
	I1002 21:56:55.038579 1206253 provision.go:84] configureAuth start
	I1002 21:56:55.038680 1206253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:56:55.072321 1206253 provision.go:143] copyHostCerts
	I1002 21:56:55.072399 1206253 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:56:55.072422 1206253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:56:55.072505 1206253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:56:55.072614 1206253 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:56:55.072631 1206253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:56:55.072684 1206253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:56:55.072754 1206253 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:56:55.072767 1206253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:56:55.072793 1206253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:56:55.072854 1206253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-842185 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-842185 localhost minikube]
	I1002 21:56:55.792390 1206253 provision.go:177] copyRemoteCerts
	I1002 21:56:55.792510 1206253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:56:55.792567 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:55.810087 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:56:55.918310 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:56:55.943108 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 21:56:55.962773 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:56:55.989450 1206253 provision.go:87] duration metric: took 950.843509ms to configureAuth
	I1002 21:56:55.989479 1206253 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:56:55.989679 1206253 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:56:55.989786 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:56.034263 1206253 main.go:141] libmachine: Using SSH client type: native
	I1002 21:56:56.034597 1206253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34221 <nil> <nil>}
	I1002 21:56:56.034615 1206253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:56:56.485797 1206253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:56:56.485888 1206253 machine.go:96] duration metric: took 5.121145681s to provisionDockerMachine
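CRIO_MINIKUBE_OPTIONS marks the service CIDR (10.96.0.0/12) as an insecure registry so in-cluster registry services can be pulled from without TLS; presumably crio.service picks the variable up through an EnvironmentFile= drop-in in the base image. A sketch of verifying the file landed:

    minikube -p default-k8s-diff-port-842185 ssh -- cat /etc/sysconfig/crio.minikube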
	I1002 21:56:56.485914 1206253 start.go:294] postStartSetup for "default-k8s-diff-port-842185" (driver="docker")
	I1002 21:56:56.485949 1206253 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:56:56.486125 1206253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:56:56.486205 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:56.515630 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:56:56.624197 1206253 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:56:56.630637 1206253 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:56:56.630662 1206253 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:56:56.630673 1206253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:56:56.630727 1206253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:56:56.630806 1206253 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:56:56.630916 1206253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:56:56.647418 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:56:56.681818 1206253 start.go:297] duration metric: took 195.864826ms for postStartSetup
	I1002 21:56:56.681947 1206253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:56:56.682026 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:56.715658 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:56:56.826411 1206253 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:56:56.831199 1206253 fix.go:57] duration metric: took 5.918310569s for fixHost
	I1002 21:56:56.831226 1206253 start.go:84] releasing machines lock for "default-k8s-diff-port-842185", held for 5.91836094s
	I1002 21:56:56.831294 1206253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842185
	I1002 21:56:56.862646 1206253 ssh_runner.go:195] Run: cat /version.json
	I1002 21:56:56.862714 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:56.862968 1206253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:56:56.863023 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:56:56.900706 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:56:56.902204 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:56:57.127516 1206253 ssh_runner.go:195] Run: systemctl --version
	I1002 21:56:57.138381 1206253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:56:57.200476 1206253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:56:57.205371 1206253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:56:57.205484 1206253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:56:57.217203 1206253 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
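Minikube renames any stock bridge/podman CNI configs out of the way so that kindnet (recommended earlier in this run) is the only active CNI. The same scan by hand, with the parentheses escaped for an interactive shell; here it prints nothing, hence "nothing to disable":

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) -print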
	I1002 21:56:57.217265 1206253 start.go:496] detecting cgroup driver to use...
	I1002 21:56:57.217310 1206253 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:56:57.217371 1206253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:56:57.237465 1206253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:56:57.259832 1206253 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:56:57.259946 1206253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:56:57.288648 1206253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:56:57.310435 1206253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:56:57.523559 1206253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:56:57.708852 1206253 docker.go:234] disabling docker service ...
	I1002 21:56:57.708980 1206253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:56:57.736082 1206253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:56:57.764820 1206253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:56:57.948973 1206253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:56:58.133670 1206253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:56:58.155461 1206253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:56:58.180341 1206253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:56:58.180470 1206253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:56:58.202441 1206253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:56:58.202591 1206253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:56:58.212589 1206253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:56:58.227246 1206253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:56:58.236691 1206253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:56:58.250915 1206253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:56:58.264525 1206253 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:56:58.276250 1206253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
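Taken together, the sed edits above leave the drop-in with roughly the following settings (a sketch of the net effect only; the surrounding keys in 02-crio.conf vary by base image):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]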
	I1002 21:56:58.289551 1206253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:56:58.298917 1206253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:56:58.310142 1206253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:56:58.494564 1206253 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:56:58.706746 1206253 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:56:58.706860 1206253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:56:58.713840 1206253 start.go:564] Will wait 60s for crictl version
	I1002 21:56:58.713956 1206253 ssh_runner.go:195] Run: which crictl
	I1002 21:56:58.718553 1206253 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:56:58.767825 1206253 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:56:58.767967 1206253 ssh_runner.go:195] Run: crio --version
	I1002 21:56:58.824515 1206253 ssh_runner.go:195] Run: crio --version
	I1002 21:56:58.888561 1206253 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:56:58.891360 1206253 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-842185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:56:58.918314 1206253 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 21:56:58.922450 1206253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:56:58.943333 1206253 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:56:58.943475 1206253 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:56:58.943528 1206253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:56:59.002958 1206253 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:56:59.003038 1206253 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:56:59.003113 1206253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:56:59.067655 1206253 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:56:59.067676 1206253 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:56:59.067684 1206253 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1002 21:56:59.067787 1206253 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-842185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
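The duplicated ExecStart= in the unit above is the standard systemd idiom for an override: the empty assignment clears the packaged command before the new one is set. A sketch of inspecting the merged unit inside the node:

    minikube -p default-k8s-diff-port-842185 ssh -- sudo systemctl cat kubelet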
	I1002 21:56:59.067861 1206253 ssh_runner.go:195] Run: crio config
	I1002 21:56:59.189299 1206253 cni.go:84] Creating CNI manager for ""
	I1002 21:56:59.189361 1206253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:56:59.189396 1206253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:56:59.189442 1206253 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-842185 NodeName:default-k8s-diff-port-842185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:56:59.189587 1206253 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-842185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
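The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as one file at /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and handed to kubeadm together. A sketch of checking such a file before use, assuming a kubeadm recent enough to ship the validate subcommand:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new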
	
	I1002 21:56:59.189686 1206253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:56:59.203037 1206253 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:56:59.203147 1206253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:56:59.211088 1206253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 21:56:59.235393 1206253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:56:59.252713 1206253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 21:56:59.266931 1206253 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:56:59.270799 1206253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:56:59.281037 1206253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:56:59.472829 1206253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:56:59.514511 1206253 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185 for IP: 192.168.85.2
	I1002 21:56:59.514534 1206253 certs.go:195] generating shared ca certs ...
	I1002 21:56:59.514552 1206253 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:56:59.514695 1206253 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:56:59.514747 1206253 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:56:59.514759 1206253 certs.go:257] generating profile certs ...
	I1002 21:56:59.514850 1206253 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.key
	I1002 21:56:59.514918 1206253 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key.af0db507
	I1002 21:56:59.514963 1206253 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key
	I1002 21:56:59.515078 1206253 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:56:59.515112 1206253 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:56:59.515126 1206253 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:56:59.515152 1206253 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:56:59.515178 1206253 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:56:59.515203 1206253 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:56:59.515247 1206253 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:56:59.515854 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:56:59.556177 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:56:59.595337 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:56:59.638440 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:56:59.697017 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 21:56:59.739114 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:56:59.790237 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:56:59.828276 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:56:59.855961 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:56:59.892022 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:56:59.926541 1206253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:56:59.984660 1206253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:57:00.013310 1206253 ssh_runner.go:195] Run: openssl version
	I1002 21:57:00.033728 1206253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:57:00.049517 1206253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:57:00.059282 1206253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:57:00.059433 1206253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:57:00.135127 1206253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:57:00.163442 1206253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:57:00.185758 1206253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:57:00.215564 1206253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:57:00.215757 1206253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:57:00.370660 1206253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:57:00.382731 1206253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:57:00.396482 1206253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:00.404102 1206253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:00.404246 1206253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:00.476573 1206253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
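The /etc/ssl/certs/<hash>.0 link names are OpenSSL subject-hash names, which is how the system trust store locates each CA; the hash in the link is exactly what the preceding openssl run prints. Reproducing the b5213941 value by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints the subject hash used for the /etc/ssl/certs/b5213941.0 symlink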
	I1002 21:57:00.496757 1206253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:57:00.515305 1206253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:57:00.631288 1206253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:57:00.780184 1206253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:57:00.950598 1206253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:57:01.074615 1206253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:57:01.166788 1206253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
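Each -checkend 86400 run asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force minikube to regenerate the cert. A standalone sketch of the same check:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expiring soon; regenerate"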
	I1002 21:57:01.267670 1206253 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-842185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:01.267826 1206253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:57:01.267930 1206253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:57:01.336598 1206253 cri.go:89] found id: "73e02330ac81eb5bb08428b429c32a892fcf1a19e76b8a5bcd1f6bca9df0bb91"
	I1002 21:57:01.336665 1206253 cri.go:89] found id: "bcc8f7a1359466ea9b86d847aaee545ff42e89546c49da9e5a234add61b6e16c"
	I1002 21:57:01.336686 1206253 cri.go:89] found id: "e26cd38eb38f190780ac20a4962aaad22e5e6f78b418d8570ba1272bc0e6fcd1"
	I1002 21:57:01.336708 1206253 cri.go:89] found id: "e2012db846f04fedc4a91921d9a9ee6083af8a50e9ac894b9c0ef5f131609e50"
	I1002 21:57:01.336744 1206253 cri.go:89] found id: ""
	I1002 21:57:01.336816 1206253 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:57:01.355677 1206253 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:01Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:57:01.355838 1206253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:57:01.371888 1206253 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:57:01.371946 1206253 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:57:01.372024 1206253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:57:01.385199 1206253 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:57:01.385673 1206253 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-842185" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:01.385830 1206253 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-842185" cluster setting kubeconfig missing "default-k8s-diff-port-842185" context setting]
	I1002 21:57:01.386218 1206253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:01.387533 1206253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:57:01.411273 1206253 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 21:57:01.411347 1206253 kubeadm.go:601] duration metric: took 39.381488ms to restartPrimaryControlPlane
	I1002 21:57:01.411371 1206253 kubeadm.go:402] duration metric: took 143.711932ms to StartCluster
	I1002 21:57:01.411413 1206253 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:01.411489 1206253 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:01.412199 1206253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:01.412450 1206253 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:57:01.412784 1206253 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:01.412858 1206253 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:57:01.412959 1206253 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-842185"
	I1002 21:57:01.413074 1206253 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-842185"
	W1002 21:57:01.413102 1206253 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:57:01.413140 1206253 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:57:01.413655 1206253 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:57:01.412989 1206253 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-842185"
	I1002 21:57:01.414381 1206253 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-842185"
	W1002 21:57:01.414390 1206253 addons.go:247] addon dashboard should already be in state true
	I1002 21:57:01.414411 1206253 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:57:01.414805 1206253 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:57:01.412997 1206253 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-842185"
	I1002 21:57:01.415052 1206253 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-842185"
	I1002 21:57:01.415345 1206253 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:57:01.418317 1206253 out.go:179] * Verifying Kubernetes components...
	I1002 21:57:01.426208 1206253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:01.465925 1206253 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:57:01.470166 1206253 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:57:01.474293 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:57:01.474316 1206253 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:57:01.474397 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:57:01.475149 1206253 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-842185"
	W1002 21:57:01.475165 1206253 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:57:01.475188 1206253 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:57:01.475612 1206253 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:57:01.481138 1206253 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:57:01.484052 1206253 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:01.484075 1206253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:57:01.484138 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:57:01.535021 1206253 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:01.535042 1206253 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:57:01.535103 1206253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:57:01.538367 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:57:01.551295 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:57:01.584539 1206253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:57:01.874539 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:57:01.874567 1206253 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:57:01.900788 1206253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:01.983430 1206253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:01.993513 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:57:01.993542 1206253 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:57:01.999686 1206253 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-842185" to be "Ready" ...
	I1002 21:57:02.044798 1206253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:02.147934 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:57:02.147963 1206253 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:57:02.295365 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:57:02.295397 1206253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:57:02.398608 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:57:02.398646 1206253 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:57:02.474503 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:57:02.474534 1206253 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:57:02.567562 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:57:02.567606 1206253 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:57:02.625916 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:57:02.625941 1206253 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:57:02.682490 1206253 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:57:02.682517 1206253 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 21:57:02.734482 1206253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
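With all ten dashboard manifests applied in one kubectl invocation, the pods still need time to come up. A follow-up check, as a sketch (the kubernetes-dashboard namespace and deployment name are the upstream defaults, assumed here):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
      -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s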
	I1002 21:57:06.918062 1203635 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:57:06.918122 1203635 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:57:06.918214 1203635 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:57:06.918272 1203635 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:57:06.918307 1203635 kubeadm.go:318] OS: Linux
	I1002 21:57:06.918354 1203635 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:57:06.918404 1203635 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:57:06.918453 1203635 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:57:06.918504 1203635 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:57:06.918554 1203635 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:57:06.918604 1203635 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:57:06.918653 1203635 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:57:06.918703 1203635 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:57:06.918751 1203635 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:57:06.918826 1203635 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:57:06.918923 1203635 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:57:06.919016 1203635 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:57:06.919081 1203635 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:57:06.922296 1203635 out.go:252]   - Generating certificates and keys ...
	I1002 21:57:06.922403 1203635 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:57:06.922471 1203635 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:57:06.922541 1203635 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:57:06.922600 1203635 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:57:06.922663 1203635 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:57:06.922715 1203635 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:57:06.922772 1203635 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:57:06.922898 1203635 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-161621] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:57:06.922952 1203635 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:57:06.923081 1203635 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-161621] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 21:57:06.923155 1203635 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:57:06.923221 1203635 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:57:06.923268 1203635 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:57:06.923326 1203635 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:57:06.923379 1203635 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:57:06.923437 1203635 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:57:06.923496 1203635 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:57:06.923562 1203635 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:57:06.923628 1203635 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:57:06.923713 1203635 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:57:06.923782 1203635 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:57:06.926960 1203635 out.go:252]   - Booting up control plane ...
	I1002 21:57:06.927143 1203635 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:57:06.927283 1203635 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:57:06.927401 1203635 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:57:06.927563 1203635 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:57:06.927681 1203635 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:57:06.927798 1203635 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:57:06.927893 1203635 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:57:06.927937 1203635 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:57:06.928082 1203635 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:57:06.928199 1203635 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:57:06.928265 1203635 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501530534s
	I1002 21:57:06.928367 1203635 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:57:06.928457 1203635 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 21:57:06.928557 1203635 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:57:06.928645 1203635 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:57:06.928729 1203635 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 8.341286699s
	I1002 21:57:06.928805 1203635 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 11.993547103s
	I1002 21:57:06.928881 1203635 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.021254075s
	I1002 21:57:06.929000 1203635 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:57:06.929139 1203635 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:57:06.929216 1203635 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:57:06.929422 1203635 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-161621 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:57:06.929485 1203635 kubeadm.go:318] [bootstrap-token] Using token: q1xzf1.ds9up8xbk91i57cp
	I1002 21:57:06.932389 1203635 out.go:252]   - Configuring RBAC rules ...
	I1002 21:57:06.932587 1203635 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:57:06.932725 1203635 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:57:06.932930 1203635 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:57:06.933122 1203635 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:57:06.933285 1203635 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:57:06.933424 1203635 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:57:06.933633 1203635 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:57:06.933737 1203635 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:57:06.933818 1203635 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:57:06.933835 1203635 kubeadm.go:318] 
	I1002 21:57:06.933899 1203635 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:57:06.933910 1203635 kubeadm.go:318] 
	I1002 21:57:06.933990 1203635 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:57:06.933999 1203635 kubeadm.go:318] 
	I1002 21:57:06.934026 1203635 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:57:06.934151 1203635 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:57:06.934208 1203635 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:57:06.934215 1203635 kubeadm.go:318] 
	I1002 21:57:06.934271 1203635 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:57:06.934283 1203635 kubeadm.go:318] 
	I1002 21:57:06.934333 1203635 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:57:06.934344 1203635 kubeadm.go:318] 
	I1002 21:57:06.934398 1203635 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:57:06.934483 1203635 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:57:06.934567 1203635 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:57:06.934577 1203635 kubeadm.go:318] 
	I1002 21:57:06.934666 1203635 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:57:06.934750 1203635 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:57:06.934758 1203635 kubeadm.go:318] 
	I1002 21:57:06.934849 1203635 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token q1xzf1.ds9up8xbk91i57cp \
	I1002 21:57:06.934960 1203635 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 \
	I1002 21:57:06.934984 1203635 kubeadm.go:318] 	--control-plane 
	I1002 21:57:06.934988 1203635 kubeadm.go:318] 
	I1002 21:57:06.935079 1203635 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:57:06.935087 1203635 kubeadm.go:318] 
	I1002 21:57:06.935172 1203635 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token q1xzf1.ds9up8xbk91i57cp \
	I1002 21:57:06.935283 1203635 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:75ca26c9d3051037297562a8bb0d2c9a64d1592a72edb7d1afaa2fbd73c169e5 
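
	For reference, the --discovery-token-ca-cert-hash printed above is not arbitrary: it is "sha256:" plus the SHA-256 digest of the cluster CA's public key in DER-encoded SubjectPublicKeyInfo form. A minimal Go sketch that recomputes it from the CA certificate in the certificateDir the [certs] phase names (a verification sketch, not minikube's own code):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// CA path follows the certificateDir logged in the [certs] phase above.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("ca.crt contains no PEM block")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
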
	I1002 21:57:06.935310 1203635 cni.go:84] Creating CNI manager for ""
	I1002 21:57:06.935321 1203635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:06.941851 1203635 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:57:10.494365 1206253 node_ready.go:49] node "default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:10.494400 1206253 node_ready.go:38] duration metric: took 8.494671383s for node "default-k8s-diff-port-842185" to be "Ready" ...
	I1002 21:57:10.494414 1206253 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:57:10.494473 1206253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:57:06.944030 1203635 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:57:06.950728 1203635 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:57:06.950751 1203635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:57:06.997459 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
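
	The two runner calls above are the whole CNI step: confirm the portmap plugin binary exists, then apply the generated kindnet manifest with the pinned kubectl. A minimal Go sketch of the same sequence via os/exec (paths and version taken from the trace):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// kindnet's port-mapping support requires the portmap CNI plugin.
		if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
			panic("portmap CNI plugin missing: " + err.Error())
		}
		// Apply the manifest minikube copied to /var/tmp/minikube/cni.yaml.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}
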
	I1002 21:57:07.569440 1203635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:57:07.569581 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:07.569647 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-161621 minikube.k8s.io/updated_at=2025_10_02T21_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=newest-cni-161621 minikube.k8s.io/primary=true
	I1002 21:57:07.914100 1203635 ops.go:34] apiserver oom_adj: -16
	I1002 21:57:07.914211 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:08.414602 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:08.914358 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:09.415056 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:09.914830 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:10.415017 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:10.914888 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:11.415116 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:11.915291 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:12.415168 1203635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:57:12.751056 1203635 kubeadm.go:1113] duration metric: took 5.181525559s to wait for elevateKubeSystemPrivileges
	I1002 21:57:12.751085 1203635 kubeadm.go:402] duration metric: took 28.674807542s to StartCluster
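
	The half-second `kubectl get sa default` loop above is the elevateKubeSystemPrivileges wait: it polls until the token controller has created the "default" ServiceAccount. A minimal client-go sketch of that wait, assuming the kubeconfig path the trace uses on the node:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, as the trace does, until the account exists.
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(
				context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for default service account")
	}
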
	I1002 21:57:12.751102 1203635 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:12.751164 1203635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:12.752146 1203635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:12.752372 1203635 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:57:12.752461 1203635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:57:12.752713 1203635 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:12.752755 1203635 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:57:12.752820 1203635 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-161621"
	I1002 21:57:12.752834 1203635 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-161621"
	I1002 21:57:12.752857 1203635 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:12.753532 1203635 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:12.753626 1203635 addons.go:69] Setting default-storageclass=true in profile "newest-cni-161621"
	I1002 21:57:12.753650 1203635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-161621"
	I1002 21:57:12.753886 1203635 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:12.756462 1203635 out.go:179] * Verifying Kubernetes components...
	I1002 21:57:12.759657 1203635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:12.796106 1203635 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:57:13.085208 1206253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.10174105s)
	I1002 21:57:13.085269 1206253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.040446865s)
	I1002 21:57:13.085535 1206253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.351005511s)
	I1002 21:57:13.085669 1206253 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.591181255s)
	I1002 21:57:13.085692 1206253 api_server.go:72] duration metric: took 11.67319485s to wait for apiserver process to appear ...
	I1002 21:57:13.085699 1206253 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:57:13.085717 1206253 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1002 21:57:13.090254 1206253 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-842185 addons enable metrics-server
	
	I1002 21:57:13.096508 1206253 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1002 21:57:12.799135 1203635 addons.go:238] Setting addon default-storageclass=true in "newest-cni-161621"
	I1002 21:57:12.799175 1203635 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:12.799603 1203635 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:12.799893 1203635 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:12.799906 1203635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:57:12.799950 1203635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:12.848740 1203635 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:12.848762 1203635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:57:12.848827 1203635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:12.850225 1203635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34216 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:12.874758 1203635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34216 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:13.311951 1203635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:13.411744 1203635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:57:13.411864 1203635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:13.452925 1203635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:14.107356 1203635 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
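
	The replace pipeline above rewrites the coredns ConfigMap so its Corefile gains a hosts stanza mapping host.minikube.internal to the host gateway (192.168.76.1 here). A minimal client-go sketch that fetches the ConfigMap to confirm the injected record (a verification sketch, not minikube code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(
			context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Expect a "hosts { 192.168.76.1 host.minikube.internal ... }" block.
		fmt.Println(cm.Data["Corefile"])
	}
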
	I1002 21:57:14.108372 1203635 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:57:14.108469 1203635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:57:14.135637 1203635 api_server.go:72] duration metric: took 1.383227665s to wait for apiserver process to appear ...
	I1002 21:57:14.135658 1203635 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:57:14.135674 1203635 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:57:14.153107 1203635 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:57:14.154749 1203635 api_server.go:141] control plane version: v1.34.1
	I1002 21:57:14.154824 1203635 api_server.go:131] duration metric: took 19.159004ms to wait for apiserver health ...
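
	The healthz wait above is a plain HTTPS GET against the apiserver; /healthz is readable without credentials under the default system:public-info-viewer binding, so a probe needs no client certificate. A minimal Go sketch (certificate verification skipped for brevity; minikube itself trusts the cluster CA from the kubeconfig):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip CA verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
	}
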
	I1002 21:57:14.154850 1203635 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:57:14.157231 1203635 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:57:14.158419 1203635 system_pods.go:59] 9 kube-system pods found
	I1002 21:57:14.158462 1203635 system_pods.go:61] "coredns-66bc5c9577-ghqbc" [7b8ea7b9-de16-4d0d-bdaa-a6bc01e910ce] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:14.158476 1203635 system_pods.go:61] "coredns-66bc5c9577-ntw6h" [970083c8-716b-436e-bc93-a888bb56d5d7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:14.158488 1203635 system_pods.go:61] "etcd-newest-cni-161621" [7ea80e5a-23cd-483a-915a-1e4d15b007d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:57:14.158494 1203635 system_pods.go:61] "kindnet-49wbb" [7efc7ea2-18d3-49a1-a3eb-1c2767978396] Running
	I1002 21:57:14.158504 1203635 system_pods.go:61] "kube-apiserver-newest-cni-161621" [d07ea3be-9874-47c6-ba52-41218649668a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:57:14.158511 1203635 system_pods.go:61] "kube-controller-manager-newest-cni-161621" [7ae67be3-2685-4cae-8d49-9afaee6a47d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:57:14.158531 1203635 system_pods.go:61] "kube-proxy-dgplp" [b7a0b577-70f7-44ba-8990-3694f9fcc965] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:57:14.158545 1203635 system_pods.go:61] "kube-scheduler-newest-cni-161621" [1e8d8f40-f980-4e6d-b889-213f346a7488] Running
	I1002 21:57:14.158551 1203635 system_pods.go:61] "storage-provisioner" [410ea72e-053b-4627-bd9d-8182cc23f02a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:14.158557 1203635 system_pods.go:74] duration metric: took 3.688354ms to wait for pod list to return data ...
	I1002 21:57:14.158566 1203635 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:57:14.160596 1203635 addons.go:514] duration metric: took 1.407825862s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:57:14.170876 1203635 default_sa.go:45] found service account: "default"
	I1002 21:57:14.170945 1203635 default_sa.go:55] duration metric: took 12.365436ms for default service account to be created ...
	I1002 21:57:14.170972 1203635 kubeadm.go:586] duration metric: took 1.418567643s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 21:57:14.171001 1203635 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:57:14.174710 1203635 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:57:14.174745 1203635 node_conditions.go:123] node cpu capacity is 2
	I1002 21:57:14.174759 1203635 node_conditions.go:105] duration metric: took 3.72375ms to run NodePressure ...
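
	The NodePressure check reads capacity and pressure conditions straight from the node's status; the cpu and ephemeral-storage figures logged above are the same fields shown under Capacity in the describe output further down. A minimal client-go sketch of that verification:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					// All three should be False on a healthy node.
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}
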
	I1002 21:57:14.174773 1203635 start.go:242] waiting for startup goroutines ...
	I1002 21:57:14.611755 1203635 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-161621" context rescaled to 1 replicas
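
	The rescale above trims the two bootstrap coredns replicas (both Pending earlier in the trace) down to one on this single-node cluster. A minimal client-go sketch using the Deployment scale subresource (kubeconfig path from the trace; not kapi.go's actual implementation):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/21683-992084/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()
		deploy := cs.AppsV1().Deployments("kube-system")
		scale, err := deploy.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := deploy.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}
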
	I1002 21:57:14.611830 1203635 start.go:247] waiting for cluster config update ...
	I1002 21:57:14.611857 1203635 start.go:256] writing updated cluster config ...
	I1002 21:57:14.612189 1203635 ssh_runner.go:195] Run: rm -f paused
	I1002 21:57:14.690573 1203635 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:57:14.694374 1203635 out.go:179] * Done! kubectl is now configured to use "newest-cni-161621" cluster and "default" namespace by default
	I1002 21:57:13.099259 1206253 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1002 21:57:13.099598 1206253 addons.go:514] duration metric: took 11.686721289s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 21:57:13.100791 1206253 api_server.go:141] control plane version: v1.34.1
	I1002 21:57:13.100815 1206253 api_server.go:131] duration metric: took 15.107131ms to wait for apiserver health ...
	I1002 21:57:13.100824 1206253 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:57:13.104835 1206253 system_pods.go:59] 8 kube-system pods found
	I1002 21:57:13.104926 1206253 system_pods.go:61] "coredns-66bc5c9577-5hq6c" [f7ff6f37-0c61-4d47-9268-a767da1b2975] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:57:13.104955 1206253 system_pods.go:61] "etcd-default-k8s-diff-port-842185" [2583503c-a0ec-4ccf-a798-8c474b2d2ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:57:13.104979 1206253 system_pods.go:61] "kindnet-qb4vm" [a0408fba-6828-4f17-beb9-7c9d8c06aadb] Running
	I1002 21:57:13.105005 1206253 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-842185" [639f7e81-9c1b-4b09-b244-f84494a340da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:57:13.105045 1206253 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-842185" [37104cf8-9137-492e-984f-c242ceb7c6cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:57:13.105077 1206253 system_pods.go:61] "kube-proxy-vhggd" [e1640af2-e216-46de-9e27-823b1ba83051] Running
	I1002 21:57:13.105102 1206253 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-842185" [bd62d81a-1b1b-4645-b66e-55cc7f7cc002] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:57:13.105122 1206253 system_pods.go:61] "storage-provisioner" [dfbacdc1-e0d1-4a90-9786-25439ee46f26] Running
	I1002 21:57:13.105153 1206253 system_pods.go:74] duration metric: took 4.322742ms to wait for pod list to return data ...
	I1002 21:57:13.105180 1206253 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:57:13.117208 1206253 default_sa.go:45] found service account: "default"
	I1002 21:57:13.117283 1206253 default_sa.go:55] duration metric: took 12.082521ms for default service account to be created ...
	I1002 21:57:13.117308 1206253 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:57:13.124459 1206253 system_pods.go:86] 8 kube-system pods found
	I1002 21:57:13.124543 1206253 system_pods.go:89] "coredns-66bc5c9577-5hq6c" [f7ff6f37-0c61-4d47-9268-a767da1b2975] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:57:13.124568 1206253 system_pods.go:89] "etcd-default-k8s-diff-port-842185" [2583503c-a0ec-4ccf-a798-8c474b2d2ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:57:13.124605 1206253 system_pods.go:89] "kindnet-qb4vm" [a0408fba-6828-4f17-beb9-7c9d8c06aadb] Running
	I1002 21:57:13.124634 1206253 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842185" [639f7e81-9c1b-4b09-b244-f84494a340da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:57:13.124659 1206253 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842185" [37104cf8-9137-492e-984f-c242ceb7c6cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:57:13.124678 1206253 system_pods.go:89] "kube-proxy-vhggd" [e1640af2-e216-46de-9e27-823b1ba83051] Running
	I1002 21:57:13.124716 1206253 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842185" [bd62d81a-1b1b-4645-b66e-55cc7f7cc002] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:57:13.124742 1206253 system_pods.go:89] "storage-provisioner" [dfbacdc1-e0d1-4a90-9786-25439ee46f26] Running
	I1002 21:57:13.124765 1206253 system_pods.go:126] duration metric: took 7.437113ms to wait for k8s-apps to be running ...
	I1002 21:57:13.124788 1206253 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:57:13.124872 1206253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:57:13.141573 1206253 system_svc.go:56] duration metric: took 16.77574ms WaitForService to wait for kubelet
	I1002 21:57:13.141643 1206253 kubeadm.go:586] duration metric: took 11.729143632s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:57:13.141677 1206253 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:57:13.147544 1206253 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:57:13.147639 1206253 node_conditions.go:123] node cpu capacity is 2
	I1002 21:57:13.147668 1206253 node_conditions.go:105] duration metric: took 5.967605ms to run NodePressure ...
	I1002 21:57:13.147707 1206253 start.go:242] waiting for startup goroutines ...
	I1002 21:57:13.147732 1206253 start.go:247] waiting for cluster config update ...
	I1002 21:57:13.147758 1206253 start.go:256] writing updated cluster config ...
	I1002 21:57:13.148069 1206253 ssh_runner.go:195] Run: rm -f paused
	I1002 21:57:13.154572 1206253 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:57:13.158728 1206253 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5hq6c" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:57:15.168244 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
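
	The pod_ready wait above treats a pod as "Ready" only when its PodReady condition is True, which is why a Running coredns pod whose containers are still unready keeps logging the warning. A minimal client-go sketch of that condition test (pod name taken from the trace):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/21683-992084/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-66bc5c9577-5hq6c", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}
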
	
	
	==> CRI-O <==
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.070844832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.084460655Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e93274e2-a0d4-4c06-a8c2-dd4f88b47b3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.140318403Z" level=info msg="Ran pod sandbox e6d697ce165aeba15e0735280015ef0635ddb208db8e1a129f7649b76336b7ef with infra container: kube-system/kindnet-49wbb/POD" id=e93274e2-a0d4-4c06-a8c2-dd4f88b47b3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.198685395Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=907a9724-ec0a-4675-9aca-43313be7cdb9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.204215079Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=46c6db8c-0c81-4768-87b9-54ac50a76634 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.218205867Z" level=info msg="Creating container: kube-system/kindnet-49wbb/kindnet-cni" id=d1e46dab-604e-4add-8380-540b52d54065 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.218491562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.222806878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.224389007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.313178994Z" level=info msg="Created container 2255a5f4864a0f0a225279b4ece74b64a0f6c0e33f7ddf198dc147e65545f34c: kube-system/kindnet-49wbb/kindnet-cni" id=d1e46dab-604e-4add-8380-540b52d54065 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.316997027Z" level=info msg="Starting container: 2255a5f4864a0f0a225279b4ece74b64a0f6c0e33f7ddf198dc147e65545f34c" id=06a0e792-2422-4d3a-9a7c-923a991df844 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:13 newest-cni-161621 crio[836]: time="2025-10-02T21:57:13.322555527Z" level=info msg="Started container" PID=1456 containerID=2255a5f4864a0f0a225279b4ece74b64a0f6c0e33f7ddf198dc147e65545f34c description=kube-system/kindnet-49wbb/kindnet-cni id=06a0e792-2422-4d3a-9a7c-923a991df844 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6d697ce165aeba15e0735280015ef0635ddb208db8e1a129f7649b76336b7ef
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.335763991Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-dgplp/POD" id=204b4b6f-a640-4437-8afe-73499f8fe10d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.335843997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.339214059Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=204b4b6f-a640-4437-8afe-73499f8fe10d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.342229011Z" level=info msg="Ran pod sandbox c33a7f5ab98bb9356f791911e089f87dd01ed8f133312a475a13f26e1c8be7a8 with infra container: kube-system/kube-proxy-dgplp/POD" id=204b4b6f-a640-4437-8afe-73499f8fe10d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.345435327Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=22f08391-af24-4d13-9f5d-daf70fd6c165 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.346665637Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ba7f7825-d425-4940-ac72-d14845549bbe name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.35196606Z" level=info msg="Creating container: kube-system/kube-proxy-dgplp/kube-proxy" id=b1bcedc4-3733-4e32-bb0f-dddbba04422c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.352256325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.35745138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.358414662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.3799323Z" level=info msg="Created container e2e261fdb549e3194fc02ceb6b39cc0aae86a6c2fe0614e22836ef3154eda5d1: kube-system/kube-proxy-dgplp/kube-proxy" id=b1bcedc4-3733-4e32-bb0f-dddbba04422c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.386096216Z" level=info msg="Starting container: e2e261fdb549e3194fc02ceb6b39cc0aae86a6c2fe0614e22836ef3154eda5d1" id=2c31912f-7e50-4b34-81d2-31d992c8912d name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:14 newest-cni-161621 crio[836]: time="2025-10-02T21:57:14.391893388Z" level=info msg="Started container" PID=1540 containerID=e2e261fdb549e3194fc02ceb6b39cc0aae86a6c2fe0614e22836ef3154eda5d1 description=kube-system/kube-proxy-dgplp/kube-proxy id=2c31912f-7e50-4b34-81d2-31d992c8912d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c33a7f5ab98bb9356f791911e089f87dd01ed8f133312a475a13f26e1c8be7a8
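
	Each container in this CRI-O log follows the same CRI call sequence: RunPodSandbox, ImageStatus, CreateContainer, StartContainer. A minimal Go sketch that checks the kube-proxy container started in the last line from the host, shelling out to crictl (container ID from the "Started container" line; assumes crictl is available on the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Filter the CRI-O container list down to the one just started.
		out, err := exec.Command("sudo", "crictl", "ps", "--id",
			"e2e261fdb549e3194fc02ceb6b39cc0aae86a6c2fe0614e22836ef3154eda5d1").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}
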
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e2e261fdb549e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   c33a7f5ab98bb       kube-proxy-dgplp                            kube-system
	2255a5f4864a0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   e6d697ce165ae       kindnet-49wbb                               kube-system
	8f6fec600c0df       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago      Running             kube-scheduler            0                   c1ccae88d6dcb       kube-scheduler-newest-cni-161621            kube-system
	50a04e9bc241c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago      Running             etcd                      0                   6dfa6789c18cf       etcd-newest-cni-161621                      kube-system
	a2aabc6881645       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago      Running             kube-controller-manager   0                   5821c8628ac3d       kube-controller-manager-newest-cni-161621   kube-system
	23d91eb716e92       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago      Running             kube-apiserver            0                   309105f114c44       kube-apiserver-newest-cni-161621            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-161621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-161621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=newest-cni-161621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:57:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-161621
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:57:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:57:06 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:57:06 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:57:06 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 21:57:06 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-161621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 91de8952c36549ec847d58dcf4a674d0
	  System UUID:                b330f332-73e4-4f26-be0f-56d7dcddd7b6
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-161621                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13s
	  kube-system                 kindnet-49wbb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-161621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-controller-manager-newest-cni-161621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-proxy-dgplp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-161621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 23s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x8 over 23s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10s                kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s                kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s                kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-161621 event: Registered Node newest-cni-161621 in Controller
	
	
	==> dmesg <==
	[ +24.621115] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [50a04e9bc241c5b4b228f7cdfae6a03a9ee186e653a650fbcb3071a607f9ff22] <==
	{"level":"warn","ts":"2025-10-02T21:56:58.932406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:58.953845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:58.974987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.019679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.031619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.047377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.069696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.092082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.115009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.152457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.221582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.265620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.309677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.401491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.408154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.464010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.512092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.570805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.583153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.604004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.675929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.694728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.746190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:56:59.810174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:00.049089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:57:16 up  6:39,  0 user,  load average: 6.17, 4.03, 2.57
	Linux newest-cni-161621 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2255a5f4864a0f0a225279b4ece74b64a0f6c0e33f7ddf198dc147e65545f34c] <==
	I1002 21:57:13.506280       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:57:13.506524       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:57:13.506660       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:57:13.506672       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:57:13.506682       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:57:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:57:13.706239       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:57:13.706268       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:57:13.706285       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:57:13.706731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [23d91eb716e92d3ea6b530b63a5b5f6f5c44f10879a54b5cfc0ccac716454017] <==
	I1002 21:57:02.989894       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:57:02.995443       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:57:03.043323       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:57:03.044246       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 21:57:03.072999       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:57:03.080865       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 21:57:03.116081       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:57:03.118271       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:57:03.235756       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 21:57:03.316795       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:57:03.316823       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:57:05.064270       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:57:05.181711       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:57:05.426896       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 21:57:05.451132       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 21:57:05.452302       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:57:05.532532       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:57:06.294412       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:57:06.303449       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:57:06.318870       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 21:57:06.329847       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:57:11.644775       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:57:12.364077       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:57:12.376454       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:57:12.392182       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a2aabc6881645a09cc2df70d130d1cea371b5e346f9348c7f54f9f73821b0643] <==
	I1002 21:57:11.447418       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 21:57:11.447451       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:57:11.456780       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 21:57:11.461271       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:11.465290       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 21:57:11.474770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:11.474849       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:57:11.474879       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:57:11.484603       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:57:11.485051       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 21:57:11.485124       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:57:11.485205       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:57:11.485525       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:57:11.486852       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:57:11.486931       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:57:11.497205       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:57:11.498685       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:57:11.500148       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:57:11.500228       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:57:11.500308       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:57:11.500702       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:57:11.497308       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:57:11.502806       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:57:11.515665       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:57:11.542558       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-161621" podCIDRs=["10.42.0.0/24"]
	
	
	==> kube-proxy [e2e261fdb549e3194fc02ceb6b39cc0aae86a6c2fe0614e22836ef3154eda5d1] <==
	I1002 21:57:14.436260       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:57:14.533170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:57:14.634623       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:57:14.638704       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 21:57:14.639351       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:57:14.739427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:57:14.739476       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:57:14.770340       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:57:14.770648       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:57:14.770670       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:14.772013       1 config.go:200] "Starting service config controller"
	I1002 21:57:14.772030       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:57:14.772046       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:57:14.772051       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:57:14.772061       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:57:14.772065       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:57:14.799029       1 config.go:309] "Starting node config controller"
	I1002 21:57:14.799049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:57:14.799057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:57:14.872968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:57:14.873000       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:57:14.873032       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8f6fec600c0df726a0ed6e731ae18418b41cf5079b1d09f4d6767e20d24daac3] <==
	I1002 21:57:00.398890       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:57:05.417136       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:57:05.417240       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:05.433933       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:57:05.439281       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:57:05.439229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:05.439352       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:05.439255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:05.439661       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:05.440418       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:57:05.440485       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:57:05.548081       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:05.548148       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:57:05.548239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999147    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00120214c54618c1b3626c04049db61b-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-161621\" (UID: \"00120214c54618c1b3626c04049db61b\") " pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999167    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00120214c54618c1b3626c04049db61b-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-161621\" (UID: \"00120214c54618c1b3626c04049db61b\") " pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999224    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e867b4f25796c0d2bcb7085862f55592-ca-certs\") pod \"kube-controller-manager-newest-cni-161621\" (UID: \"e867b4f25796c0d2bcb7085862f55592\") " pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999251    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e867b4f25796c0d2bcb7085862f55592-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-161621\" (UID: \"e867b4f25796c0d2bcb7085862f55592\") " pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999298    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e867b4f25796c0d2bcb7085862f55592-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-161621\" (UID: \"e867b4f25796c0d2bcb7085862f55592\") " pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999325    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00120214c54618c1b3626c04049db61b-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-161621\" (UID: \"00120214c54618c1b3626c04049db61b\") " pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:06 newest-cni-161621 kubelet[1300]: I1002 21:57:06.999390    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e867b4f25796c0d2bcb7085862f55592-k8s-certs\") pod \"kube-controller-manager-newest-cni-161621\" (UID: \"e867b4f25796c0d2bcb7085862f55592\") " pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:07 newest-cni-161621 kubelet[1300]: I1002 21:57:07.415606    1300 apiserver.go:52] "Watching apiserver"
	Oct 02 21:57:07 newest-cni-161621 kubelet[1300]: I1002 21:57:07.489904    1300 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 21:57:11 newest-cni-161621 kubelet[1300]: I1002 21:57:11.532474    1300 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 21:57:11 newest-cni-161621 kubelet[1300]: I1002 21:57:11.533394    1300 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.574174    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-cni-cfg\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.578208    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-xtables-lock\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.578440    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmpnt\" (UniqueName: \"kubernetes.io/projected/7efc7ea2-18d3-49a1-a3eb-1c2767978396-kube-api-access-lmpnt\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.578598    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7a0b577-70f7-44ba-8990-3694f9fcc965-lib-modules\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.578735    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-lib-modules\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.578845    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7a0b577-70f7-44ba-8990-3694f9fcc965-xtables-lock\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.578943    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfxz\" (UniqueName: \"kubernetes.io/projected/b7a0b577-70f7-44ba-8990-3694f9fcc965-kube-api-access-fwfxz\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.579044    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7a0b577-70f7-44ba-8990-3694f9fcc965-kube-proxy\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: E1002 21:57:12.584120    1300 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-161621\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-161621' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 02 21:57:12 newest-cni-161621 kubelet[1300]: I1002 21:57:12.827253    1300 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:57:13 newest-cni-161621 kubelet[1300]: E1002 21:57:13.683707    1300 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:57:13 newest-cni-161621 kubelet[1300]: E1002 21:57:13.683823    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7a0b577-70f7-44ba-8990-3694f9fcc965-kube-proxy podName:b7a0b577-70f7-44ba-8990-3694f9fcc965 nodeName:}" failed. No retries permitted until 2025-10-02 21:57:14.183796325 +0000 UTC m=+7.961073394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b7a0b577-70f7-44ba-8990-3694f9fcc965-kube-proxy") pod "kube-proxy-dgplp" (UID: "b7a0b577-70f7-44ba-8990-3694f9fcc965") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:57:14 newest-cni-161621 kubelet[1300]: I1002 21:57:14.934364    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dgplp" podStartSLOduration=2.93434522 podStartE2EDuration="2.93434522s" podCreationTimestamp="2025-10-02 21:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:57:14.934266109 +0000 UTC m=+8.711543186" watchObservedRunningTime="2025-10-02 21:57:14.93434522 +0000 UTC m=+8.711622289"
	Oct 02 21:57:14 newest-cni-161621 kubelet[1300]: I1002 21:57:14.934718    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-49wbb" podStartSLOduration=2.934710716 podStartE2EDuration="2.934710716s" podCreationTimestamp="2025-10-02 21:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 21:57:13.916220668 +0000 UTC m=+7.693497761" watchObservedRunningTime="2025-10-02 21:57:14.934710716 +0000 UTC m=+8.711987785"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-161621 -n newest-cni-161621
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-161621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ntw6h storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner: exit status 1 (125.00936ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ntw6h" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.18s)
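Note: the post-mortem describe step failed with NotFound most likely because the two pods flagged as non-Running by the field selector (coredns-66bc5c9577-ntw6h, storage-provisioner) had been recreated under new names or cleaned up between the listing and the describe call, so the exit status 1 above is a stale-name race rather than a second cluster fault. A minimal re-check sketch, assuming the kubectl context from this run still exists (the event listing is illustrative and not part of the harness):

	kubectl --context newest-cni-161621 get pods -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-161621 get events -A --sort-by=.lastTimestamp | tail -n 20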

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-161621 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-161621 --alsologtostderr -v=1: exit status 80 (1.736777133s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-161621 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:57:36.508845 1211232 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:57:36.508988 1211232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:36.509033 1211232 out.go:374] Setting ErrFile to fd 2...
	I1002 21:57:36.509054 1211232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:36.509343 1211232 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:57:36.509646 1211232 out.go:368] Setting JSON to false
	I1002 21:57:36.509705 1211232 mustload.go:65] Loading cluster: newest-cni-161621
	I1002 21:57:36.510176 1211232 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:36.510762 1211232 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:36.529071 1211232 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:36.529548 1211232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:36.590251 1211232 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:57:36.580088396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:36.591272 1211232 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-161621 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 21:57:36.596914 1211232 out.go:179] * Pausing node newest-cni-161621 ... 
	I1002 21:57:36.599909 1211232 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:36.600281 1211232 ssh_runner.go:195] Run: systemctl --version
	I1002 21:57:36.600331 1211232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:36.620555 1211232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:36.721846 1211232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:57:36.737762 1211232 pause.go:51] kubelet running: true
	I1002 21:57:36.737858 1211232 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:57:36.985907 1211232 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:57:36.985990 1211232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:57:37.079302 1211232 cri.go:89] found id: "d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8"
	I1002 21:57:37.079366 1211232 cri.go:89] found id: "9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1"
	I1002 21:57:37.079377 1211232 cri.go:89] found id: "c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329"
	I1002 21:57:37.079382 1211232 cri.go:89] found id: "b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8"
	I1002 21:57:37.079412 1211232 cri.go:89] found id: "75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68"
	I1002 21:57:37.079419 1211232 cri.go:89] found id: "2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c"
	I1002 21:57:37.079423 1211232 cri.go:89] found id: ""
	I1002 21:57:37.079471 1211232 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:57:37.089993 1211232 retry.go:31] will retry after 142.047741ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:37Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:57:37.232337 1211232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:57:37.250488 1211232 pause.go:51] kubelet running: false
	I1002 21:57:37.250569 1211232 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:57:37.489826 1211232 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:57:37.489966 1211232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:57:37.563690 1211232 cri.go:89] found id: "d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8"
	I1002 21:57:37.563762 1211232 cri.go:89] found id: "9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1"
	I1002 21:57:37.563784 1211232 cri.go:89] found id: "c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329"
	I1002 21:57:37.563805 1211232 cri.go:89] found id: "b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8"
	I1002 21:57:37.563840 1211232 cri.go:89] found id: "75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68"
	I1002 21:57:37.563861 1211232 cri.go:89] found id: "2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c"
	I1002 21:57:37.563879 1211232 cri.go:89] found id: ""
	I1002 21:57:37.563962 1211232 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:57:37.575780 1211232 retry.go:31] will retry after 297.357909ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:37Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:57:37.874116 1211232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:57:37.887402 1211232 pause.go:51] kubelet running: false
	I1002 21:57:37.887524 1211232 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:57:38.083596 1211232 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:57:38.083742 1211232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:57:38.153562 1211232 cri.go:89] found id: "d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8"
	I1002 21:57:38.153635 1211232 cri.go:89] found id: "9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1"
	I1002 21:57:38.153655 1211232 cri.go:89] found id: "c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329"
	I1002 21:57:38.153675 1211232 cri.go:89] found id: "b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8"
	I1002 21:57:38.153695 1211232 cri.go:89] found id: "75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68"
	I1002 21:57:38.153730 1211232 cri.go:89] found id: "2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c"
	I1002 21:57:38.153750 1211232 cri.go:89] found id: ""
	I1002 21:57:38.153828 1211232 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:57:38.171045 1211232 out.go:203] 
	W1002 21:57:38.173901 1211232 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:57:38.173920 1211232 out.go:285] * 
	* 
	W1002 21:57:38.182342 1211232 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:57:38.185319 1211232 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-161621 --alsologtostderr -v=1 failed: exit status 80
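Note: every retry in the stderr above dies on the same call, `sudo runc list -f json` exiting 1 with `open /run/runc: no such file or directory`: minikube's pause path first collects container IDs via crictl (which succeeds, six IDs are found each time) and then asks runc for the running set via its default state directory, which does not exist on this node, so the pause aborts after having already disabled the kubelet. A hedged diagnostic sketch, assuming the profile is still up; the paths shown are runc/CRI-O defaults and are not verified against this kicbase image:

	minikube ssh -p newest-cni-161621 -- sudo ls -la /run/runc          # runc state dir the pause code expects; missing per the log
	minikube ssh -p newest-cni-161621 -- sudo crictl ps -a --quiet      # CRI-O itself still enumerates the containers
	minikube ssh -p newest-cni-161621 -- sudo runc --root /run/runc list  # reproduces the exact failing call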
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-161621
helpers_test.go:243: (dbg) docker inspect newest-cni-161621:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad",
	        "Created": "2025-10-02T21:56:36.536154593Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1209470,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:57:20.065464254Z",
	            "FinishedAt": "2025-10-02T21:57:18.897759227Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/hostname",
	        "HostsPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/hosts",
	        "LogPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad-json.log",
	        "Name": "/newest-cni-161621",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-161621:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-161621",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad",
	                "LowerDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-161621",
	                "Source": "/var/lib/docker/volumes/newest-cni-161621/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-161621",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-161621",
	                "name.minikube.sigs.k8s.io": "newest-cni-161621",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b15cb14e879923d587263a5ee23238d096e7465c585bde759ec7f18c81c30082",
	            "SandboxKey": "/var/run/docker/netns/b15cb14e8799",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-161621": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:74:16:aa:94:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "155fa7e09312067d402503432852b0577257ec8e8857e0c34bee66c5c9279cb6",
	                    "EndpointID": "b354b3c63139c3e3dfb12995ed19c323e6127233889d42a7b4b4406a045e61dd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-161621",
	                        "4274608d314f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621: exit status 2 (340.059834ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
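Note: `--format={{.Host}}` prints Running while the command still exits 2, which is consistent with the partial pause above: the container kept running, but the kubelet had already been disabled before the pause aborted (`kubelet running: false` in the pause log), so a non-host component is reported down. To see the full component breakdown, assuming the binary and profile from this run:

	out/minikube-linux-arm64 status -p newest-cni-161621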
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-161621 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-161621 logs -n 25: (1.07596706s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ stop    │ -p embed-certs-132977 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:55 UTC │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842185 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ stop    │ -p newest-cni-161621 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-161621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ image   │ newest-cni-161621 image list --format=json                                                                                                                                                                                                    │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ pause   │ -p newest-cni-161621 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:57:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:57:19.653367 1209333 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:57:19.653488 1209333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:19.653554 1209333 out.go:374] Setting ErrFile to fd 2...
	I1002 21:57:19.653559 1209333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:19.653814 1209333 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:57:19.654396 1209333 out.go:368] Setting JSON to false
	I1002 21:57:19.655392 1209333 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23977,"bootTime":1759418263,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:57:19.655465 1209333 start.go:140] virtualization:  
	I1002 21:57:19.660113 1209333 out.go:179] * [newest-cni-161621] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:57:19.664323 1209333 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:57:19.664438 1209333 notify.go:221] Checking for updates...
	I1002 21:57:19.671834 1209333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:57:19.674920 1209333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:19.678355 1209333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:57:19.681247 1209333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:57:19.684471 1209333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:57:19.688960 1209333 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:19.689527 1209333 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:57:19.722779 1209333 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:57:19.722899 1209333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:19.838427 1209333 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:57:19.827278511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:19.838534 1209333 docker.go:319] overlay module found
	I1002 21:57:19.843798 1209333 out.go:179] * Using the docker driver based on existing profile
	I1002 21:57:19.847179 1209333 start.go:306] selected driver: docker
	I1002 21:57:19.847203 1209333 start.go:936] validating driver "docker" against &{Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:19.847308 1209333 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:57:19.847999 1209333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:19.945710 1209333 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:57:19.935291017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:19.946170 1209333 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 21:57:19.946199 1209333 cni.go:84] Creating CNI manager for ""
	I1002 21:57:19.946257 1209333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:19.946296 1209333 start.go:350] cluster config:
	{Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:19.950114 1209333 out.go:179] * Starting "newest-cni-161621" primary control-plane node in "newest-cni-161621" cluster
	I1002 21:57:19.953231 1209333 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:57:19.956301 1209333 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:57:19.959302 1209333 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:19.959362 1209333 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:57:19.959377 1209333 cache.go:59] Caching tarball of preloaded images
	I1002 21:57:19.959402 1209333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:57:19.959469 1209333 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:57:19.959480 1209333 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:57:19.959604 1209333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/config.json ...
	I1002 21:57:19.983649 1209333 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:57:19.983670 1209333 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:57:19.983683 1209333 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:57:19.983705 1209333 start.go:361] acquireMachinesLock for newest-cni-161621: {Name:mk369c5d3d45aed0e984b21d641c17abd7d1dc57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:57:19.983756 1209333 start.go:365] duration metric: took 35.437µs to acquireMachinesLock for "newest-cni-161621"
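The machines lock above serializes concurrent minikube runs on one host; the Delay:500ms and Timeout:10m0s fields in the log describe its retry policy. A minimal Go sketch of the same idea with flock(2) follows; the lock path and the flock mechanism are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// tryLock polls a file lock with a fixed delay until the timeout
	// expires, mirroring the Delay/Timeout fields logged above.
	func tryLock(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err = syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil // caller releases with LOCK_UN and Close
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, err
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := tryLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
		fmt.Println("lock acquired")
	}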
	I1002 21:57:19.983776 1209333 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:57:19.983780 1209333 fix.go:55] fixHost starting: 
	I1002 21:57:19.984037 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:20.006194 1209333 fix.go:113] recreateIfNeeded on newest-cni-161621: state=Stopped err=<nil>
	W1002 21:57:20.006227 1209333 fix.go:139] unexpected machine state, will restart: <nil>
	W1002 21:57:17.181820 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:19.676795 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:20.013780 1209333 out.go:252] * Restarting existing docker container for "newest-cni-161621" ...
	I1002 21:57:20.013904 1209333 cli_runner.go:164] Run: docker start newest-cni-161621
	I1002 21:57:20.351563 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:20.382146 1209333 kic.go:430] container "newest-cni-161621" state is running.
	I1002 21:57:20.382506 1209333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-161621
	I1002 21:57:20.404551 1209333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/config.json ...
	I1002 21:57:20.404771 1209333 machine.go:93] provisionDockerMachine start ...
	I1002 21:57:20.404846 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:20.429042 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:20.429370 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:20.429383 1209333 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:57:20.430187 1209333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:57:23.570579 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161621
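The handshake EOF above is expected right after docker start: sshd inside the container is not listening yet, and libmachine retries until the hostname command succeeds about three seconds later. A minimal sketch of such a retry loop with golang.org/x/crypto/ssh; the user and port mirror this log, the attempt count is an assumption, and real use would add key auth from the id_rsa path shown below:

	package main

	import (
		"log"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps dialing until sshd in the freshly restarted
	// container accepts connections; early attempts fail with
	// "ssh: handshake failed: EOF" exactly as logged above.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c *ssh.Client
			if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
				return c, nil
			}
			log.Printf("ssh dial attempt %d failed: %v", i+1, err)
			time.Sleep(time.Second)
		}
		return nil, err
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker", // matches the sshutil Username in this log
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			// Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)} in real use
		}
		if _, err := dialWithRetry("127.0.0.1:34226", cfg, 10); err != nil {
			log.Fatal(err)
		}
	}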
	
	I1002 21:57:23.570662 1209333 ubuntu.go:182] provisioning hostname "newest-cni-161621"
	I1002 21:57:23.570773 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:23.591837 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:23.592146 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:23.592158 1209333 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-161621 && echo "newest-cni-161621" | sudo tee /etc/hostname
	I1002 21:57:23.751647 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161621
	
	I1002 21:57:23.751729 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:23.775958 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:23.776275 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:23.776298 1209333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-161621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-161621/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-161621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:57:23.930655 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:57:23.930690 1209333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:57:23.930713 1209333 ubuntu.go:190] setting up certificates
	I1002 21:57:23.930726 1209333 provision.go:84] configureAuth start
	I1002 21:57:23.930812 1209333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-161621
	I1002 21:57:23.959908 1209333 provision.go:143] copyHostCerts
	I1002 21:57:23.959979 1209333 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:57:23.960004 1209333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:57:23.960079 1209333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:57:23.960185 1209333 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:57:23.960196 1209333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:57:23.960224 1209333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:57:23.960293 1209333 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:57:23.960302 1209333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:57:23.960327 1209333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:57:23.960386 1209333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.newest-cni-161621 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-161621]
	I1002 21:57:24.059563 1209333 provision.go:177] copyRemoteCerts
	I1002 21:57:24.059656 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:57:24.059712 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.082022 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:24.183403 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:57:24.202881 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:57:24.222656 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:57:24.246489 1209333 provision.go:87] duration metric: took 315.742704ms to configureAuth
	I1002 21:57:24.246558 1209333 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:57:24.246786 1209333 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:24.246929 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.268483 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:24.268796 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:24.268811 1209333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:57:24.585325 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:57:24.585391 1209333 machine.go:96] duration metric: took 4.180601981s to provisionDockerMachine
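Each provisioning step in this phase (the hostname command, the /etc/hosts edit, the sysconfig write plus crio restart) is issued as its own SSH session over the client. Continuing the dial sketch earlier, one command per session looks roughly like this; minikube's ssh_runner adds retries and richer output handling:

	package sketch

	import "golang.org/x/crypto/ssh"

	// runRemote executes one shell command over an established client;
	// CombinedOutput captures stdout and stderr together, matching the
	// "SSH cmd err, output" lines in this log.
	func runRemote(client *ssh.Client, cmd string) (string, error) {
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}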
	I1002 21:57:24.585432 1209333 start.go:294] postStartSetup for "newest-cni-161621" (driver="docker")
	I1002 21:57:24.585472 1209333 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:57:24.585551 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:57:24.585620 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.605521 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	W1002 21:57:22.164945 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:24.165283 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:24.716111 1209333 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:57:24.720316 1209333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:57:24.720343 1209333 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:57:24.720355 1209333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:57:24.720406 1209333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:57:24.720480 1209333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:57:24.720589 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:57:24.729758 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:57:24.751205 1209333 start.go:297] duration metric: took 165.729546ms for postStartSetup
	I1002 21:57:24.751360 1209333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:57:24.751423 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.772318 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:24.867971 1209333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:57:24.873679 1209333 fix.go:57] duration metric: took 4.889890646s for fixHost
	I1002 21:57:24.873708 1209333 start.go:84] releasing machines lock for "newest-cni-161621", held for 4.889943149s
	I1002 21:57:24.873785 1209333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-161621
	I1002 21:57:24.896796 1209333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:57:24.896871 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.896980 1209333 ssh_runner.go:195] Run: cat /version.json
	I1002 21:57:24.897018 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.934577 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:24.938796 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:25.050112 1209333 ssh_runner.go:195] Run: systemctl --version
	I1002 21:57:25.149339 1209333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:57:25.211052 1209333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:57:25.219040 1209333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:57:25.219152 1209333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:57:25.230078 1209333 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:57:25.230106 1209333 start.go:496] detecting cgroup driver to use...
	I1002 21:57:25.230138 1209333 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:57:25.230199 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:57:25.247868 1209333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:57:25.262471 1209333 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:57:25.262545 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:57:25.279966 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:57:25.294853 1209333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:57:25.440394 1209333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:57:25.599617 1209333 docker.go:234] disabling docker service ...
	I1002 21:57:25.599721 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:57:25.615660 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:57:25.629297 1209333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:57:25.791842 1209333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:57:25.940498 1209333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:57:25.956033 1209333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:57:25.971393 1209333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:57:25.971471 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:25.981072 1209333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:57:25.981155 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:25.991192 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.001384 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.012425 1209333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:57:26.022089 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.032743 1209333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.042370 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
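Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands in this log, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart a few lines below pick these up.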
	I1002 21:57:26.052553 1209333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:57:26.061521 1209333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:57:26.070605 1209333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:26.219123 1209333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:57:26.813533 1209333 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:57:26.813608 1209333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:57:26.824287 1209333 start.go:564] Will wait 60s for crictl version
	I1002 21:57:26.824421 1209333 ssh_runner.go:195] Run: which crictl
	I1002 21:57:26.831959 1209333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:57:26.867066 1209333 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:57:26.867238 1209333 ssh_runner.go:195] Run: crio --version
	I1002 21:57:26.904395 1209333 ssh_runner.go:195] Run: crio --version
	I1002 21:57:26.944167 1209333 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:57:26.947245 1209333 cli_runner.go:164] Run: docker network inspect newest-cni-161621 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:57:26.965235 1209333 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:57:26.969259 1209333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
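The grep/echo/cp one-liner above (and the identical one for control-plane.minikube.internal further down) makes the hosts entry idempotent: drop any old line for the name, append the current mapping, replace the file via cp. The same idiom in Go, as a sketch only, since editing the real /etc/hosts needs root:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any line ending in "\t<name>" and appends
	// a fresh "ip\tname" mapping, mirroring the shell idiom above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0] // in-place filter
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// values from the log; point path at a scratch copy to try it out
		if err := ensureHostsEntry("/tmp/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}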
	I1002 21:57:26.982932 1209333 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 21:57:26.985688 1209333 kubeadm.go:883] updating cluster {Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:57:26.985825 1209333 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:26.985913 1209333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:27.023959 1209333 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:27.023984 1209333 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:57:27.024041 1209333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:27.056404 1209333 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:27.056430 1209333 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:57:27.056438 1209333 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 21:57:27.056548 1209333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-161621 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:57:27.056644 1209333 ssh_runner.go:195] Run: crio config
	I1002 21:57:27.155301 1209333 cni.go:84] Creating CNI manager for ""
	I1002 21:57:27.155328 1209333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:27.155346 1209333 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 21:57:27.155370 1209333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-161621 NodeName:newest-cni-161621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:57:27.155493 1209333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-161621"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
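	For context: on a fresh node a generated file like the one above is what kubeadm consumes in one shot, along the lines of
	
		kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	
	Here the cluster is only being restarted, so the log below merely stages the file as kubeadm.yaml.new and diffs it against the previous copy before concluding that no reconfiguration is needed.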
	
	I1002 21:57:27.155566 1209333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:57:27.166406 1209333 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:57:27.166481 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:57:27.175543 1209333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:57:27.192116 1209333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:57:27.205485 1209333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 21:57:27.219751 1209333 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:57:27.223557 1209333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:57:27.233778 1209333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:27.386691 1209333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:27.407135 1209333 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621 for IP: 192.168.76.2
	I1002 21:57:27.407216 1209333 certs.go:195] generating shared ca certs ...
	I1002 21:57:27.407249 1209333 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:27.407459 1209333 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:57:27.407535 1209333 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:57:27.407589 1209333 certs.go:257] generating profile certs ...
	I1002 21:57:27.407717 1209333 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/client.key
	I1002 21:57:27.407835 1209333 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/apiserver.key.7bf184af
	I1002 21:57:27.407924 1209333 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/proxy-client.key
	I1002 21:57:27.408090 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:57:27.408148 1209333 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:57:27.408172 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:57:27.408235 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:57:27.408296 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:57:27.408358 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:57:27.408453 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:57:27.409185 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:57:27.456051 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:57:27.483166 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:57:27.504811 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:57:27.527685 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:57:27.574337 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:57:27.608570 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:57:27.634974 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:57:27.686062 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:57:27.714711 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:57:27.740103 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:57:27.777235 1209333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:57:27.796597 1209333 ssh_runner.go:195] Run: openssl version
	I1002 21:57:27.803776 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:57:27.821073 1209333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:27.825212 1209333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:27.825323 1209333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:27.872742 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:57:27.883690 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:57:27.893317 1209333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:57:27.897303 1209333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:57:27.897412 1209333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:57:27.941169 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:57:27.950613 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:57:27.959170 1209333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:57:27.964193 1209333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:57:27.964307 1209333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:57:28.014816 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
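The three openssl x509 -hash / ln -fs pairs above populate OpenSSL's hashed trust directory: at verification time a CA in /etc/ssl/certs is found through a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 here). A Go sketch of creating one such link, shelling out to openssl for the hash; collision suffixes (.1, .2, ...) are omitted:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes the certificate's subject hash with
	// openssl and symlinks <hash>.0 to it, like the ln -fs calls above.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // replace a stale link, matching ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		// paths from the log; writing to /etc/ssl/certs needs root
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}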
	I1002 21:57:28.024478 1209333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:57:28.029187 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:57:28.086868 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:57:28.180141 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:57:28.251079 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:57:28.360569 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:57:28.441450 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 21:57:28.558624 1209333 kubeadm.go:400] StartCluster: {Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:28.558736 1209333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:57:28.558808 1209333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:57:28.635257 1209333 cri.go:89] found id: "c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329"
	I1002 21:57:28.635327 1209333 cri.go:89] found id: "b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8"
	I1002 21:57:28.635345 1209333 cri.go:89] found id: "75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68"
	I1002 21:57:28.635365 1209333 cri.go:89] found id: "2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c"
	I1002 21:57:28.635397 1209333 cri.go:89] found id: ""
	I1002 21:57:28.635468 1209333 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:57:28.685462 1209333 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:28Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:57:28.685590 1209333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:57:28.726552 1209333 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:57:28.726584 1209333 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:57:28.726636 1209333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:57:28.771461 1209333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:57:28.772050 1209333 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-161621" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:28.772310 1209333 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-161621" cluster setting kubeconfig missing "newest-cni-161621" context setting]
	I1002 21:57:28.772806 1209333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:28.774404 1209333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:57:28.792157 1209333 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:57:28.792232 1209333 kubeadm.go:601] duration metric: took 65.640499ms to restartPrimaryControlPlane
	I1002 21:57:28.792256 1209333 kubeadm.go:402] duration metric: took 233.641287ms to StartCluster
	I1002 21:57:28.792302 1209333 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:28.792383 1209333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:28.793316 1209333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:28.793596 1209333 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:57:28.793936 1209333 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:28.794017 1209333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:57:28.794370 1209333 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-161621"
	I1002 21:57:28.794433 1209333 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-161621"
	I1002 21:57:28.794395 1209333 addons.go:69] Setting dashboard=true in profile "newest-cni-161621"
	I1002 21:57:28.794484 1209333 addons.go:238] Setting addon dashboard=true in "newest-cni-161621"
	W1002 21:57:28.794492 1209333 addons.go:247] addon dashboard should already be in state true
	W1002 21:57:28.794511 1209333 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:57:28.794556 1209333 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:28.794516 1209333 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:28.795169 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.795273 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.794402 1209333 addons.go:69] Setting default-storageclass=true in profile "newest-cni-161621"
	I1002 21:57:28.795661 1209333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-161621"
	I1002 21:57:28.795931 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.799274 1209333 out.go:179] * Verifying Kubernetes components...
	I1002 21:57:28.808296 1209333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:28.849020 1209333 addons.go:238] Setting addon default-storageclass=true in "newest-cni-161621"
	W1002 21:57:28.849044 1209333 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:57:28.849068 1209333 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:28.849487 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.854885 1209333 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:57:28.860936 1209333 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:57:28.863906 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:57:28.863929 1209333 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:57:28.863995 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:28.875924 1209333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:57:28.883047 1209333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:28.883077 1209333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:57:28.883152 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:28.898231 1209333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:28.898255 1209333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:57:28.898323 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:28.934554 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:28.957350 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:28.958953 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:29.243168 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:57:29.243196 1209333 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:57:29.271414 1209333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:29.334354 1209333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:29.343994 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:57:29.344021 1209333 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:57:29.351054 1209333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:29.385363 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:57:29.385385 1209333 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:57:29.437260 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:57:29.437325 1209333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:57:29.470442 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:57:29.470505 1209333 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:57:29.529127 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:57:29.529154 1209333 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:57:29.579682 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:57:29.579709 1209333 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:57:29.606878 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:57:29.606944 1209333 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:57:29.635558 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:57:29.635631 1209333 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W1002 21:57:26.664463 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:28.671944 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:29.655436 1209333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:57:35.176139 1209333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.904685909s)
	I1002 21:57:35.176180 1209333 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.841801292s)
	I1002 21:57:35.176213 1209333 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:57:35.176269 1209333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:57:35.176328 1209333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.825250145s)
	I1002 21:57:35.176676 1209333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.521157589s)
	I1002 21:57:35.179837 1209333 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-161621 addons enable metrics-server
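
	Note: the hint above is advisory; metrics-server appears as disabled in the toEnable map earlier in this log. Current addon state for the profile can be listed with:

	    minikube -p newest-cni-161621 addons list
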
	
	I1002 21:57:35.199497 1209333 api_server.go:72] duration metric: took 6.405845331s to wait for apiserver process to appear ...
	I1002 21:57:35.199518 1209333 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:57:35.199535 1209333 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:57:35.213708 1209333 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:57:35.213786 1209333 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
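
	Note: the only failing check in both 500 responses is [-]poststarthook/rbac/bootstrap-roles, which is expected for a short window after an apiserver restart while the default RBAC roles are reconciled; the follow-up probe at 21:57:35.70 below returns 200. A verbose re-check from a configured client:

	    kubectl get --raw='/healthz?verbose'
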
	I1002 21:57:35.219509 1209333 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1002 21:57:31.164433 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:33.165149 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:35.222452 1209333 addons.go:514] duration metric: took 6.428431856s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 21:57:35.699806 1209333 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:57:35.708531 1209333 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:57:35.709565 1209333 api_server.go:141] control plane version: v1.34.1
	I1002 21:57:35.709592 1209333 api_server.go:131] duration metric: took 510.066782ms to wait for apiserver health ...
	I1002 21:57:35.709601 1209333 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:57:35.713005 1209333 system_pods.go:59] 8 kube-system pods found
	I1002 21:57:35.713042 1209333 system_pods.go:61] "coredns-66bc5c9577-ntw6h" [970083c8-716b-436e-bc93-a888bb56d5d7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:35.713052 1209333 system_pods.go:61] "etcd-newest-cni-161621" [7ea80e5a-23cd-483a-915a-1e4d15b007d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:57:35.713060 1209333 system_pods.go:61] "kindnet-49wbb" [7efc7ea2-18d3-49a1-a3eb-1c2767978396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:57:35.713068 1209333 system_pods.go:61] "kube-apiserver-newest-cni-161621" [d07ea3be-9874-47c6-ba52-41218649668a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:57:35.713080 1209333 system_pods.go:61] "kube-controller-manager-newest-cni-161621" [7ae67be3-2685-4cae-8d49-9afaee6a47d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:57:35.713091 1209333 system_pods.go:61] "kube-proxy-dgplp" [b7a0b577-70f7-44ba-8990-3694f9fcc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:57:35.713106 1209333 system_pods.go:61] "kube-scheduler-newest-cni-161621" [1e8d8f40-f980-4e6d-b889-213f346a7488] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:57:35.713112 1209333 system_pods.go:61] "storage-provisioner" [410ea72e-053b-4627-bd9d-8182cc23f02a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:35.713118 1209333 system_pods.go:74] duration metric: took 3.511793ms to wait for pod list to return data ...
	I1002 21:57:35.713130 1209333 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:57:35.715612 1209333 default_sa.go:45] found service account: "default"
	I1002 21:57:35.715637 1209333 default_sa.go:55] duration metric: took 2.500702ms for default service account to be created ...
	I1002 21:57:35.715650 1209333 kubeadm.go:586] duration metric: took 6.922002841s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 21:57:35.715692 1209333 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:57:35.718867 1209333 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:57:35.718939 1209333 node_conditions.go:123] node cpu capacity is 2
	I1002 21:57:35.718967 1209333 node_conditions.go:105] duration metric: took 3.268492ms to run NodePressure ...
	I1002 21:57:35.718992 1209333 start.go:242] waiting for startup goroutines ...
	I1002 21:57:35.719019 1209333 start.go:247] waiting for cluster config update ...
	I1002 21:57:35.719032 1209333 start.go:256] writing updated cluster config ...
	I1002 21:57:35.719341 1209333 ssh_runner.go:195] Run: rm -f paused
	I1002 21:57:35.799137 1209333 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:57:35.804147 1209333 out.go:179] * Done! kubectl is now configured to use "newest-cni-161621" cluster and "default" namespace by default
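
	Note: "minor skew: 1" is informational; kubectl is supported within one minor version of the API server, so the v1.33.2 client against the v1.34.1 control plane is within policy. Both versions can be confirmed with:

	    kubectl version
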
	
	
	==> CRI-O <==
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.84466903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.847922778Z" level=info msg="Running pod sandbox: kube-system/kindnet-49wbb/POD" id=b05acf7f-95f8-4abe-84fe-6cfd527c9bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.847978613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.85426726Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b05acf7f-95f8-4abe-84fe-6cfd527c9bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.857423115Z" level=info msg="Ran pod sandbox bee316cc90992e74dee42e3fecd4ee6a520fbd0cb5e9643883841d052ec73b5f with infra container: kube-system/kindnet-49wbb/POD" id=b05acf7f-95f8-4abe-84fe-6cfd527c9bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.859773011Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e192a8d7-28a6-48f4-8c9f-afe5b1f3bfe8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.862150113Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dbd9b6f3-0fae-48e5-aabd-8309e23cf456 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.874581319Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1e97187a-57ba-461b-bb0c-fb2e738ca644 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.878664879Z" level=info msg="Ran pod sandbox fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2 with infra container: kube-system/kube-proxy-dgplp/POD" id=dbd9b6f3-0fae-48e5-aabd-8309e23cf456 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.881755079Z" level=info msg="Creating container: kube-system/kindnet-49wbb/kindnet-cni" id=ae67a082-15da-4578-81ae-b3b7fe45b6e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.882188717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.884944944Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=217c41f3-54d2-48fa-b244-c423793ac91f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.892874549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.895455634Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9a8e07bc-a070-4c67-a21e-f8e355cdf3d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.902819117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.905410967Z" level=info msg="Creating container: kube-system/kube-proxy-dgplp/kube-proxy" id=794a9561-a7da-4b53-8481-2dddaf344788 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.905755532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.922144213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.923036013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.9742545Z" level=info msg="Created container 9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1: kube-system/kindnet-49wbb/kindnet-cni" id=ae67a082-15da-4578-81ae-b3b7fe45b6e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.975131104Z" level=info msg="Starting container: 9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1" id=78122307-4ea3-482d-bac6-c852005f307a name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.977158588Z" level=info msg="Started container" PID=1059 containerID=9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1 description=kube-system/kindnet-49wbb/kindnet-cni id=78122307-4ea3-482d-bac6-c852005f307a name=/runtime.v1.RuntimeService/StartContainer sandboxID=bee316cc90992e74dee42e3fecd4ee6a520fbd0cb5e9643883841d052ec73b5f
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.980234552Z" level=info msg="Created container d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8: kube-system/kube-proxy-dgplp/kube-proxy" id=794a9561-a7da-4b53-8481-2dddaf344788 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.980883569Z" level=info msg="Starting container: d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8" id=8fb65bc9-9b19-49e2-a682-79c2becc3029 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.985208772Z" level=info msg="Started container" PID=1063 containerID=d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8 description=kube-system/kube-proxy-dgplp/kube-proxy id=8fb65bc9-9b19-49e2-a682-79c2becc3029 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d39f9d09819ec       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   fdf2281f7d215       kube-proxy-dgplp                            kube-system
	9f46faf037e3b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   bee316cc90992       kindnet-49wbb                               kube-system
	c40c6f2c67087       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   344c97851088d       kube-apiserver-newest-cni-161621            kube-system
	b078018388b12       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   3406375dcc296       kube-scheduler-newest-cni-161621            kube-system
	75092104da292       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   4d3fa71007a0d       kube-controller-manager-newest-cni-161621   kube-system
	2a823c2b3de35       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   a230f341b5bc6       etcd-newest-cni-161621                      kube-system
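
	Note: every container in the table shows ATTEMPT 1, i.e. the second start of each container after the restart, matching the sandbox re-creation in the CRI-O log above. The table can be regenerated on the node with:

	    sudo crictl ps -a
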
	
	
	==> describe nodes <==
	Name:               newest-cni-161621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-161621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=newest-cni-161621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:57:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-161621
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:57:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-161621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a2812ce78fc428e93e7b6e890f7c160
	  System UUID:                b330f332-73e4-4f26-be0f-56d7dcddd7b6
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-161621                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-49wbb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-161621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-161621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-dgplp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-161621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-161621 event: Registered Node newest-cni-161621 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-161621 event: Registered Node newest-cni-161621 in Controller
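
	Note: the Ready=False condition ("no CNI configuration file in /etc/cni/net.d/") explains both the node.kubernetes.io/not-ready:NoSchedule taint at the top of this description and the two Pending pods reported earlier; it clears once the kindnet container started above writes its CNI config. The taint and the CNI directory can be checked with:

	    kubectl get node newest-cni-161621 -o jsonpath='{.spec.taints}'
	    minikube -p newest-cni-161621 ssh -- ls /etc/cni/net.d/
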
	
	
	==> dmesg <==
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[ +27.661855] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c] <==
	{"level":"warn","ts":"2025-10-02T21:57:31.893731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:31.942212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:31.972913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.007571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.020273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.059273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.101715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.131536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.151033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.190374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.217796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.251495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.274375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.306711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.327000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.347070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.366434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.388074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.423511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.451916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.490156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.513667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.536761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.549468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.610106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:57:39 up  6:39,  0 user,  load average: 6.59, 4.31, 2.70
	Linux newest-cni-161621 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1] <==
	I1002 21:57:35.118517       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:57:35.118804       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:57:35.118913       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:57:35.118925       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:57:35.118937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:57:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:57:35.311401       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:57:35.311468       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:57:35.311506       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:57:35.312406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329] <==
	I1002 21:57:33.794436       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:57:33.794469       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:57:33.820231       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:57:33.820259       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:57:33.820439       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:57:33.826269       1 aggregator.go:171] initial CRD sync complete...
	I1002 21:57:33.826290       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 21:57:33.826298       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:57:33.826306       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:57:33.862778       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 21:57:33.862810       1 policy_source.go:240] refreshing policies
	I1002 21:57:33.862892       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:57:33.871740       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1002 21:57:33.941720       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:57:34.392747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:57:34.588926       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:57:34.602899       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:57:34.822492       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:57:34.943883       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:57:34.961721       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:57:35.085128       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.158.195"}
	I1002 21:57:35.120692       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.47.124"}
	I1002 21:57:37.368706       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:57:37.416947       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:57:37.814122       1 controller.go:667] quota admission added evaluator for: replicasets.apps
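
	Note: the two "allocated clusterIPs" lines correspond to the dashboard manifests applied earlier (dashboard-svc.yaml); the resulting Services can be listed with:

	    kubectl -n kubernetes-dashboard get svc
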
	
	
	==> kube-controller-manager [75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68] <==
	I1002 21:57:37.359459       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:57:37.363882       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:57:37.366201       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:57:37.371183       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:57:37.372339       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:57:37.376125       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:57:37.376414       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:57:37.376486       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:57:37.376524       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:57:37.376559       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:57:37.376219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:57:37.376367       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:57:37.378857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 21:57:37.378957       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:57:37.393525       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:57:37.393697       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:57:37.394206       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-161621"
	I1002 21:57:37.394303       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 21:57:37.400288       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:57:37.408171       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:57:37.408353       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:57:37.417974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:37.418063       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:57:37.418098       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:57:37.418217       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8] <==
	I1002 21:57:35.218875       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:57:35.309215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:57:35.410887       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:57:35.411016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 21:57:35.411155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:57:35.429273       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:57:35.429324       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:57:35.433287       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:57:35.433631       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:57:35.433691       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:35.434988       1 config.go:200] "Starting service config controller"
	I1002 21:57:35.435050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:57:35.435095       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:57:35.435123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:57:35.435158       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:57:35.435184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:57:35.435850       1 config.go:309] "Starting node config controller"
	I1002 21:57:35.435907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:57:35.435938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:57:35.535511       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:57:35.535515       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:57:35.535548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
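
	Note: the "Kube-proxy configuration may be incomplete or incorrect" entry is a warning, not a failure: with nodePortAddresses unset, NodePort traffic is accepted on every local IP. In a kubeadm-provisioned cluster such as this one, the setting lives in the kube-proxy ConfigMap, where the field the warning names (nodePortAddresses, e.g. ["primary"]) can be inspected or changed:

	    kubectl -n kube-system get configmap kube-proxy -o yaml
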
	
	
	==> kube-scheduler [b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8] <==
	I1002 21:57:32.465320       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:57:34.625545       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:57:34.633174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:34.676306       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:57:34.677164       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:57:34.677238       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:57:34.677293       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:57:34.685532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:34.693018       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:34.690141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:34.693286       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:34.778815       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:57:34.794096       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:34.795434       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:57:32 newest-cni-161621 kubelet[729]: E1002 21:57:32.547603     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-161621\" not found" node="newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.941806     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.942914     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.942997     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.943026     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.945524     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: E1002 21:57:33.995923     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-161621\" already exists" pod="kube-system/etcd-newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.995959     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: E1002 21:57:34.025171     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-161621\" already exists" pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.025209     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: E1002 21:57:34.069472     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-161621\" already exists" pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.069522     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: E1002 21:57:34.090532     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-161621\" already exists" pod="kube-system/kube-scheduler-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.525386     729 apiserver.go:52] "Watching apiserver"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.546399     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582499     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-cni-cfg\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582544     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-xtables-lock\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582608     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-lib-modules\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582650     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7a0b577-70f7-44ba-8990-3694f9fcc965-xtables-lock\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582668     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7a0b577-70f7-44ba-8990-3694f9fcc965-lib-modules\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.618711     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: W1002 21:57:34.876202     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/crio-fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2 WatchSource:0}: Error finding container fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2: Status 404 returned error can't find the container with id fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2
	Oct 02 21:57:36 newest-cni-161621 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:57:36 newest-cni-161621 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:57:36 newest-cni-161621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-161621 -n newest-cni-161621
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-161621 -n newest-cni-161621: exit status 2 (354.093017ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
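For reference, the --format flag here is a Go template evaluated against minikube's status struct, so a single component such as APIServer or Host can be probed in isolation. A minimal sketch of the same check, reusing the binary, profile, and node name from this run:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-161621 -n newest-cni-161621

A non-zero exit from this command encodes component state rather than a CLI failure, which is why the harness records exit status 2 as "may be ok".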
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-161621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg: exit status 1 (86.810963ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ntw6h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-xwlp9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-426rg" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg: exit status 1
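The sweep above pairs a field selector with a JSONPath projection to surface every pod not in phase Running; a minimal standalone form of the same query, assuming the kubeconfig context from this run:

	kubectl --context newest-cni-161621 get pods -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

The NotFound errors from the follow-up describe suggest those pods were deleted in the window between the listing and the describe call, so the exit status 1 reflects stale pod names rather than a kubectl fault.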
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-161621
helpers_test.go:243: (dbg) docker inspect newest-cni-161621:

-- stdout --
	[
	    {
	        "Id": "4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad",
	        "Created": "2025-10-02T21:56:36.536154593Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1209470,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:57:20.065464254Z",
	            "FinishedAt": "2025-10-02T21:57:18.897759227Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/hostname",
	        "HostsPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/hosts",
	        "LogPath": "/var/lib/docker/containers/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad-json.log",
	        "Name": "/newest-cni-161621",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-161621:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-161621",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad",
	                "LowerDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82468ade2c550bcf18f9beb969d80c75d06c3a36df17c029f9f82c0a2a5dab59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-161621",
	                "Source": "/var/lib/docker/volumes/newest-cni-161621/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-161621",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-161621",
	                "name.minikube.sigs.k8s.io": "newest-cni-161621",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b15cb14e879923d587263a5ee23238d096e7465c585bde759ec7f18c81c30082",
	            "SandboxKey": "/var/run/docker/netns/b15cb14e8799",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-161621": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:74:16:aa:94:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "155fa7e09312067d402503432852b0577257ec8e8857e0c34bee66c5c9279cb6",
	                    "EndpointID": "b354b3c63139c3e3dfb12995ed19c323e6127233889d42a7b4b4406a045e61dd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-161621",
	                        "4274608d314f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
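The raw docker inspect dump above is what later steps filter with Go templates via -f/--format; two minimal probes over the same container, matching the templates the harness itself runs further down in the minikube logs:

	docker container inspect newest-cni-161621 --format={{.State.Status}}
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-161621

The second template resolves the host port forwarded to the container's sshd, 34226 in this run.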
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621: exit status 2 (362.552422ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-161621 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-161621 logs -n 25: (1.095304047s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-661954 image list --format=json                                                                                                                                                                                                    │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ pause   │ -p no-preload-661954 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-132977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │                     │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ stop    │ -p embed-certs-132977 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:55 UTC │
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842185 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ stop    │ -p newest-cni-161621 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-161621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ image   │ newest-cni-161621 image list --format=json                                                                                                                                                                                                    │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ pause   │ -p newest-cni-161621 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:57:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:57:19.653367 1209333 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:57:19.653488 1209333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:19.653554 1209333 out.go:374] Setting ErrFile to fd 2...
	I1002 21:57:19.653559 1209333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:19.653814 1209333 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:57:19.654396 1209333 out.go:368] Setting JSON to false
	I1002 21:57:19.655392 1209333 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23977,"bootTime":1759418263,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:57:19.655465 1209333 start.go:140] virtualization:  
	I1002 21:57:19.660113 1209333 out.go:179] * [newest-cni-161621] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:57:19.664323 1209333 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:57:19.664438 1209333 notify.go:221] Checking for updates...
	I1002 21:57:19.671834 1209333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:57:19.674920 1209333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:19.678355 1209333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:57:19.681247 1209333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:57:19.684471 1209333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:57:19.688960 1209333 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:19.689527 1209333 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:57:19.722779 1209333 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:57:19.722899 1209333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:19.838427 1209333 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:57:19.827278511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:19.838534 1209333 docker.go:319] overlay module found
	I1002 21:57:19.843798 1209333 out.go:179] * Using the docker driver based on existing profile
	I1002 21:57:19.847179 1209333 start.go:306] selected driver: docker
	I1002 21:57:19.847203 1209333 start.go:936] validating driver "docker" against &{Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:19.847308 1209333 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:57:19.847999 1209333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:19.945710 1209333 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 21:57:19.935291017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:19.946170 1209333 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 21:57:19.946199 1209333 cni.go:84] Creating CNI manager for ""
	I1002 21:57:19.946257 1209333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:19.946296 1209333 start.go:350] cluster config:
	{Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:19.950114 1209333 out.go:179] * Starting "newest-cni-161621" primary control-plane node in "newest-cni-161621" cluster
	I1002 21:57:19.953231 1209333 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:57:19.956301 1209333 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:57:19.959302 1209333 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:19.959362 1209333 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:57:19.959377 1209333 cache.go:59] Caching tarball of preloaded images
	I1002 21:57:19.959402 1209333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:57:19.959469 1209333 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:57:19.959480 1209333 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:57:19.959604 1209333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/config.json ...
	I1002 21:57:19.983649 1209333 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:57:19.983670 1209333 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:57:19.983683 1209333 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:57:19.983705 1209333 start.go:361] acquireMachinesLock for newest-cni-161621: {Name:mk369c5d3d45aed0e984b21d641c17abd7d1dc57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:57:19.983756 1209333 start.go:365] duration metric: took 35.437µs to acquireMachinesLock for "newest-cni-161621"
	I1002 21:57:19.983776 1209333 start.go:97] Skipping create...Using existing machine configuration
	I1002 21:57:19.983780 1209333 fix.go:55] fixHost starting: 
	I1002 21:57:19.984037 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:20.006194 1209333 fix.go:113] recreateIfNeeded on newest-cni-161621: state=Stopped err=<nil>
	W1002 21:57:20.006227 1209333 fix.go:139] unexpected machine state, will restart: <nil>
	W1002 21:57:17.181820 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:19.676795 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:20.013780 1209333 out.go:252] * Restarting existing docker container for "newest-cni-161621" ...
	I1002 21:57:20.013904 1209333 cli_runner.go:164] Run: docker start newest-cni-161621
	I1002 21:57:20.351563 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:20.382146 1209333 kic.go:430] container "newest-cni-161621" state is running.
	I1002 21:57:20.382506 1209333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-161621
	I1002 21:57:20.404551 1209333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/config.json ...
	I1002 21:57:20.404771 1209333 machine.go:93] provisionDockerMachine start ...
	I1002 21:57:20.404846 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:20.429042 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:20.429370 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:20.429383 1209333 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:57:20.430187 1209333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:57:23.570579 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161621
	
	I1002 21:57:23.570662 1209333 ubuntu.go:182] provisioning hostname "newest-cni-161621"
	I1002 21:57:23.570773 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:23.591837 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:23.592146 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:23.592158 1209333 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-161621 && echo "newest-cni-161621" | sudo tee /etc/hostname
	I1002 21:57:23.751647 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-161621
	
	I1002 21:57:23.751729 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:23.775958 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:23.776275 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:23.776298 1209333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-161621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-161621/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-161621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:57:23.930655 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:57:23.930690 1209333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:57:23.930713 1209333 ubuntu.go:190] setting up certificates
	I1002 21:57:23.930726 1209333 provision.go:84] configureAuth start
	I1002 21:57:23.930812 1209333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-161621
	I1002 21:57:23.959908 1209333 provision.go:143] copyHostCerts
	I1002 21:57:23.959979 1209333 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:57:23.960004 1209333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:57:23.960079 1209333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:57:23.960185 1209333 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:57:23.960196 1209333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:57:23.960224 1209333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:57:23.960293 1209333 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:57:23.960302 1209333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:57:23.960327 1209333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:57:23.960386 1209333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.newest-cni-161621 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-161621]
	I1002 21:57:24.059563 1209333 provision.go:177] copyRemoteCerts
	I1002 21:57:24.059656 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:57:24.059712 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.082022 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:24.183403 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:57:24.202881 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:57:24.222656 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:57:24.246489 1209333 provision.go:87] duration metric: took 315.742704ms to configureAuth
	I1002 21:57:24.246558 1209333 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:57:24.246786 1209333 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:24.246929 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.268483 1209333 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:24.268796 1209333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34226 <nil> <nil>}
	I1002 21:57:24.268811 1209333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:57:24.585325 1209333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:57:24.585391 1209333 machine.go:96] duration metric: took 4.180601981s to provisionDockerMachine
	I1002 21:57:24.585432 1209333 start.go:294] postStartSetup for "newest-cni-161621" (driver="docker")
	I1002 21:57:24.585472 1209333 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:57:24.585551 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:57:24.585620 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.605521 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	W1002 21:57:22.164945 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:24.165283 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:24.716111 1209333 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:57:24.720316 1209333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:57:24.720343 1209333 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:57:24.720355 1209333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:57:24.720406 1209333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:57:24.720480 1209333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:57:24.720589 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:57:24.729758 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:57:24.751205 1209333 start.go:297] duration metric: took 165.729546ms for postStartSetup
	I1002 21:57:24.751360 1209333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:57:24.751423 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.772318 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:24.867971 1209333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:57:24.873679 1209333 fix.go:57] duration metric: took 4.889890646s for fixHost
	I1002 21:57:24.873708 1209333 start.go:84] releasing machines lock for "newest-cni-161621", held for 4.889943149s
	I1002 21:57:24.873785 1209333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-161621
	I1002 21:57:24.896796 1209333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:57:24.896871 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.896980 1209333 ssh_runner.go:195] Run: cat /version.json
	I1002 21:57:24.897018 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:24.934577 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:24.938796 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:25.050112 1209333 ssh_runner.go:195] Run: systemctl --version
	I1002 21:57:25.149339 1209333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:57:25.211052 1209333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:57:25.219040 1209333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:57:25.219152 1209333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:57:25.230078 1209333 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:57:25.230106 1209333 start.go:496] detecting cgroup driver to use...
	I1002 21:57:25.230138 1209333 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:57:25.230199 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:57:25.247868 1209333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:57:25.262471 1209333 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:57:25.262545 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:57:25.279966 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:57:25.294853 1209333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:57:25.440394 1209333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:57:25.599617 1209333 docker.go:234] disabling docker service ...
	I1002 21:57:25.599721 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:57:25.615660 1209333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:57:25.629297 1209333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:57:25.791842 1209333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:57:25.940498 1209333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:57:25.956033 1209333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:57:25.971393 1209333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:57:25.971471 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:25.981072 1209333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:57:25.981155 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:25.991192 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.001384 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.012425 1209333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:57:26.022089 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.032743 1209333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.042370 1209333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:26.052553 1209333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:57:26.061521 1209333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:57:26.070605 1209333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:26.219123 1209333 ssh_runner.go:195] Run: sudo systemctl restart crio
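
The runtime-configuration phase above first writes /etc/crictl.yaml so crictl talks to CRI-O's socket, then runs a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before a daemon-reload and crio restart. A rough Go equivalent of the two key rewrites — illustrative only, since minikube shells out to printf and sed rather than editing the file from Go:

// runtimecfg.go — sketch of the CRI-O configuration phase logged above.
// Run as root on the guest; `systemctl restart crio` is still needed
// afterwards, as in the log.
package main

import (
	"log"
	"os"
	"regexp"
)

// setKey replaces any `key = ...` line with `key = "value"`, mirroring
// sed -i 's|^.*key = .*$|key = "value"|'.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	// 1. Point crictl at the CRI-O runtime endpoint.
	const crictl = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0o644); err != nil {
		log.Fatal(err)
	}

	// 2. Pin the pause image and cgroup driver in the drop-in config.
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		log.Fatal(err)
	}
}
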
	I1002 21:57:26.813533 1209333 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:57:26.813608 1209333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:57:26.824287 1209333 start.go:564] Will wait 60s for crictl version
	I1002 21:57:26.824421 1209333 ssh_runner.go:195] Run: which crictl
	I1002 21:57:26.831959 1209333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:57:26.867066 1209333 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:57:26.867238 1209333 ssh_runner.go:195] Run: crio --version
	I1002 21:57:26.904395 1209333 ssh_runner.go:195] Run: crio --version
	I1002 21:57:26.944167 1209333 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:57:26.947245 1209333 cli_runner.go:164] Run: docker network inspect newest-cni-161621 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:57:26.965235 1209333 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 21:57:26.969259 1209333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
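
The shell one-liner above is the standard hosts-file upsert: filter out any stale host.minikube.internal line, append a fresh mapping, and copy the result over /etc/hosts. A small illustrative Go version of the same filter-and-append:

// hostsentry.go — sketch of the /etc/hosts update above: drop any stale line
// for the alias, then append a fresh "IP<tab>name" mapping. The logged shell
// version stages the result in /tmp and `cp`s it into place because inside a
// container /etc/hosts is a bind mount and cannot be renamed over; writing
// through the existing file, as here, has the same effect.
package main

import (
	"log"
	"os"
	"strings"
)

func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this alias; drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// IP and alias are the ones from the log.
	if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
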
	I1002 21:57:26.982932 1209333 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 21:57:26.985688 1209333 kubeadm.go:883] updating cluster {Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:57:26.985825 1209333 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:26.985913 1209333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:27.023959 1209333 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:27.023984 1209333 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:57:27.024041 1209333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:27.056404 1209333 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:27.056430 1209333 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:57:27.056438 1209333 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 21:57:27.056548 1209333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-161621 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:57:27.056644 1209333 ssh_runner.go:195] Run: crio config
	I1002 21:57:27.155301 1209333 cni.go:84] Creating CNI manager for ""
	I1002 21:57:27.155328 1209333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:27.155346 1209333 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 21:57:27.155370 1209333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-161621 NodeName:newest-cni-161621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:57:27.155493 1209333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-161621"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:57:27.155566 1209333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:57:27.166406 1209333 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:57:27.166481 1209333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:57:27.175543 1209333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:57:27.192116 1209333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:57:27.205485 1209333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
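
The kubeadm.yaml.new just copied (2212 bytes) is the four-document YAML stream dumped above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One way to sanity-check such a file is to decode each document and print its apiVersion/kind; a sketch using gopkg.in/yaml.v3, with the path taken from the log:

// kubeadmcfg.go — decode the multi-document kubeadm config stream and list
// the kinds it contains. Illustrative check, not part of minikube.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
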
	I1002 21:57:27.219751 1209333 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:57:27.223557 1209333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:57:27.233778 1209333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:27.386691 1209333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:27.407135 1209333 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621 for IP: 192.168.76.2
	I1002 21:57:27.407216 1209333 certs.go:195] generating shared ca certs ...
	I1002 21:57:27.407249 1209333 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:27.407459 1209333 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:57:27.407535 1209333 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:57:27.407589 1209333 certs.go:257] generating profile certs ...
	I1002 21:57:27.407717 1209333 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/client.key
	I1002 21:57:27.407835 1209333 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/apiserver.key.7bf184af
	I1002 21:57:27.407924 1209333 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/proxy-client.key
	I1002 21:57:27.408090 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:57:27.408148 1209333 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:57:27.408172 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:57:27.408235 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:57:27.408296 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:57:27.408358 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:57:27.408453 1209333 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:57:27.409185 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:57:27.456051 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:57:27.483166 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:57:27.504811 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:57:27.527685 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:57:27.574337 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:57:27.608570 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:57:27.634974 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/newest-cni-161621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:57:27.686062 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:57:27.714711 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:57:27.740103 1209333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:57:27.777235 1209333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:57:27.796597 1209333 ssh_runner.go:195] Run: openssl version
	I1002 21:57:27.803776 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:57:27.821073 1209333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:27.825212 1209333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:27.825323 1209333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:57:27.872742 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:57:27.883690 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:57:27.893317 1209333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:57:27.897303 1209333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:57:27.897412 1209333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:57:27.941169 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:57:27.950613 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:57:27.959170 1209333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:57:27.964193 1209333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:57:27.964307 1209333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:57:28.014816 1209333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:57:28.024478 1209333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:57:28.029187 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:57:28.086868 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:57:28.180141 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:57:28.251079 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:57:28.360569 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:57:28.441450 1209333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
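
The six `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509 — a sketch, using one of the logged cert paths as the example:

// certcheck.go — Go equivalent of `openssl x509 -checkend 86400`: parse a
// PEM certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// -checkend fails (nonzero exit) when NotAfter falls inside the window.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
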
	I1002 21:57:28.558624 1209333 kubeadm.go:400] StartCluster: {Name:newest-cni-161621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-161621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:28.558736 1209333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:57:28.558808 1209333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:57:28.635257 1209333 cri.go:89] found id: "c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329"
	I1002 21:57:28.635327 1209333 cri.go:89] found id: "b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8"
	I1002 21:57:28.635345 1209333 cri.go:89] found id: "75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68"
	I1002 21:57:28.635365 1209333 cri.go:89] found id: "2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c"
	I1002 21:57:28.635397 1209333 cri.go:89] found id: ""
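
StartCluster begins by listing every kube-system container over the CRI, using the exact crictl invocation shown above, to decide whether any container is paused and needs unpausing. A sketch that shells out the same way — it assumes crictl on PATH and root privileges on the guest:

// crilist.go — list all kube-system container IDs via the same crictl
// command the log runs over ssh.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
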
	I1002 21:57:28.635468 1209333 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:57:28.685462 1209333 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:57:28Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:57:28.685590 1209333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:57:28.726552 1209333 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:57:28.726584 1209333 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:57:28.726636 1209333 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:57:28.771461 1209333 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:57:28.772050 1209333 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-161621" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:28.772310 1209333 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-992084/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-161621" cluster setting kubeconfig missing "newest-cni-161621" context setting]
	I1002 21:57:28.772806 1209333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:28.774404 1209333 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:57:28.792157 1209333 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 21:57:28.792232 1209333 kubeadm.go:601] duration metric: took 65.640499ms to restartPrimaryControlPlane
	I1002 21:57:28.792256 1209333 kubeadm.go:402] duration metric: took 233.641287ms to StartCluster
	I1002 21:57:28.792302 1209333 settings.go:142] acquiring lock: {Name:mke760b8aff031924c771ee335d67a500cc9f9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:28.792383 1209333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:28.793316 1209333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/kubeconfig: {Name:mkf15d8a75c36d07fb15f8fcd6dece7db422c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:28.793596 1209333 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:57:28.793936 1209333 config.go:182] Loaded profile config "newest-cni-161621": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:28.794017 1209333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:57:28.794370 1209333 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-161621"
	I1002 21:57:28.794433 1209333 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-161621"
	I1002 21:57:28.794395 1209333 addons.go:69] Setting dashboard=true in profile "newest-cni-161621"
	I1002 21:57:28.794484 1209333 addons.go:238] Setting addon dashboard=true in "newest-cni-161621"
	W1002 21:57:28.794492 1209333 addons.go:247] addon dashboard should already be in state true
	W1002 21:57:28.794511 1209333 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:57:28.794556 1209333 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:28.794516 1209333 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:28.795169 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.795273 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.794402 1209333 addons.go:69] Setting default-storageclass=true in profile "newest-cni-161621"
	I1002 21:57:28.795661 1209333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-161621"
	I1002 21:57:28.795931 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.799274 1209333 out.go:179] * Verifying Kubernetes components...
	I1002 21:57:28.808296 1209333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:28.849020 1209333 addons.go:238] Setting addon default-storageclass=true in "newest-cni-161621"
	W1002 21:57:28.849044 1209333 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:57:28.849068 1209333 host.go:66] Checking if "newest-cni-161621" exists ...
	I1002 21:57:28.849487 1209333 cli_runner.go:164] Run: docker container inspect newest-cni-161621 --format={{.State.Status}}
	I1002 21:57:28.854885 1209333 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 21:57:28.860936 1209333 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 21:57:28.863906 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 21:57:28.863929 1209333 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 21:57:28.863995 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:28.875924 1209333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:57:28.883047 1209333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:28.883077 1209333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:57:28.883152 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:28.898231 1209333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:28.898255 1209333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:57:28.898323 1209333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-161621
	I1002 21:57:28.934554 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:28.957350 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:28.958953 1209333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34226 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/newest-cni-161621/id_rsa Username:docker}
	I1002 21:57:29.243168 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 21:57:29.243196 1209333 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 21:57:29.271414 1209333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:57:29.334354 1209333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:29.343994 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 21:57:29.344021 1209333 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 21:57:29.351054 1209333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:57:29.385363 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 21:57:29.385385 1209333 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 21:57:29.437260 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 21:57:29.437325 1209333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 21:57:29.470442 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 21:57:29.470505 1209333 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 21:57:29.529127 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 21:57:29.529154 1209333 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 21:57:29.579682 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 21:57:29.579709 1209333 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 21:57:29.606878 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 21:57:29.606944 1209333 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 21:57:29.635558 1209333 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:57:29.635631 1209333 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W1002 21:57:26.664463 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:28.671944 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:29.655436 1209333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 21:57:35.176139 1209333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.904685909s)
	I1002 21:57:35.176180 1209333 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.841801292s)
	I1002 21:57:35.176213 1209333 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:57:35.176269 1209333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:57:35.176328 1209333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.825250145s)
	I1002 21:57:35.176676 1209333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.521157589s)
	I1002 21:57:35.179837 1209333 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-161621 addons enable metrics-server
	
	I1002 21:57:35.199497 1209333 api_server.go:72] duration metric: took 6.405845331s to wait for apiserver process to appear ...
	I1002 21:57:35.199518 1209333 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:57:35.199535 1209333 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:57:35.213708 1209333 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:57:35.213786 1209333 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:57:35.219509 1209333 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1002 21:57:31.164433 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:33.165149 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:35.222452 1209333 addons.go:514] duration metric: took 6.428431856s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 21:57:35.699806 1209333 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 21:57:35.708531 1209333 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 21:57:35.709565 1209333 api_server.go:141] control plane version: v1.34.1
	I1002 21:57:35.709592 1209333 api_server.go:131] duration metric: took 510.066782ms to wait for apiserver health ...
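
The healthz probes above show the expected restart pattern: the first poll returns 500 while the rbac/bootstrap-roles post-start hook is still settling, and roughly half a second later the endpoint returns 200 "ok". A sketch of such a polling loop (TLS verification is skipped here for brevity; minikube itself pins the cluster CA):

// healthz.go — poll the apiserver's /healthz endpoint until it returns 200,
// mirroring the wait logged above. Endpoint is the one from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			fmt.Println("not ready:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
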
	I1002 21:57:35.709601 1209333 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:57:35.713005 1209333 system_pods.go:59] 8 kube-system pods found
	I1002 21:57:35.713042 1209333 system_pods.go:61] "coredns-66bc5c9577-ntw6h" [970083c8-716b-436e-bc93-a888bb56d5d7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:35.713052 1209333 system_pods.go:61] "etcd-newest-cni-161621" [7ea80e5a-23cd-483a-915a-1e4d15b007d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:57:35.713060 1209333 system_pods.go:61] "kindnet-49wbb" [7efc7ea2-18d3-49a1-a3eb-1c2767978396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 21:57:35.713068 1209333 system_pods.go:61] "kube-apiserver-newest-cni-161621" [d07ea3be-9874-47c6-ba52-41218649668a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:57:35.713080 1209333 system_pods.go:61] "kube-controller-manager-newest-cni-161621" [7ae67be3-2685-4cae-8d49-9afaee6a47d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:57:35.713091 1209333 system_pods.go:61] "kube-proxy-dgplp" [b7a0b577-70f7-44ba-8990-3694f9fcc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 21:57:35.713106 1209333 system_pods.go:61] "kube-scheduler-newest-cni-161621" [1e8d8f40-f980-4e6d-b889-213f346a7488] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:57:35.713112 1209333 system_pods.go:61] "storage-provisioner" [410ea72e-053b-4627-bd9d-8182cc23f02a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 21:57:35.713118 1209333 system_pods.go:74] duration metric: took 3.511793ms to wait for pod list to return data ...
	I1002 21:57:35.713130 1209333 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:57:35.715612 1209333 default_sa.go:45] found service account: "default"
	I1002 21:57:35.715637 1209333 default_sa.go:55] duration metric: took 2.500702ms for default service account to be created ...
	I1002 21:57:35.715650 1209333 kubeadm.go:586] duration metric: took 6.922002841s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 21:57:35.715692 1209333 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:57:35.718867 1209333 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:57:35.718939 1209333 node_conditions.go:123] node cpu capacity is 2
	I1002 21:57:35.718967 1209333 node_conditions.go:105] duration metric: took 3.268492ms to run NodePressure ...
	I1002 21:57:35.718992 1209333 start.go:242] waiting for startup goroutines ...
	I1002 21:57:35.719019 1209333 start.go:247] waiting for cluster config update ...
	I1002 21:57:35.719032 1209333 start.go:256] writing updated cluster config ...
	I1002 21:57:35.719341 1209333 ssh_runner.go:195] Run: rm -f paused
	I1002 21:57:35.799137 1209333 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:57:35.804147 1209333 out.go:179] * Done! kubectl is now configured to use "newest-cni-161621" cluster and "default" namespace by default
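
The closing note reports kubectl 1.33.2 against a 1.34.1 control plane, a minor skew of 1, which kubectl tolerates (one minor version in either direction). A tiny sketch of that skew computation, using the versions from the log:

// skew.go — compute the kubectl/control-plane minor-version skew.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return -1 // malformed version; real code should return an error
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.33.2", "1.34.1"
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (supported: skew <= 1)\n", skew)
}
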
	W1002 21:57:35.665029 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:38.165288 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.84466903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.847922778Z" level=info msg="Running pod sandbox: kube-system/kindnet-49wbb/POD" id=b05acf7f-95f8-4abe-84fe-6cfd527c9bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.847978613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.85426726Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b05acf7f-95f8-4abe-84fe-6cfd527c9bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.857423115Z" level=info msg="Ran pod sandbox bee316cc90992e74dee42e3fecd4ee6a520fbd0cb5e9643883841d052ec73b5f with infra container: kube-system/kindnet-49wbb/POD" id=b05acf7f-95f8-4abe-84fe-6cfd527c9bb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.859773011Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e192a8d7-28a6-48f4-8c9f-afe5b1f3bfe8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.862150113Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=dbd9b6f3-0fae-48e5-aabd-8309e23cf456 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.874581319Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1e97187a-57ba-461b-bb0c-fb2e738ca644 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.878664879Z" level=info msg="Ran pod sandbox fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2 with infra container: kube-system/kube-proxy-dgplp/POD" id=dbd9b6f3-0fae-48e5-aabd-8309e23cf456 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.881755079Z" level=info msg="Creating container: kube-system/kindnet-49wbb/kindnet-cni" id=ae67a082-15da-4578-81ae-b3b7fe45b6e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.882188717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.884944944Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=217c41f3-54d2-48fa-b244-c423793ac91f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.892874549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.895455634Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9a8e07bc-a070-4c67-a21e-f8e355cdf3d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.902819117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.905410967Z" level=info msg="Creating container: kube-system/kube-proxy-dgplp/kube-proxy" id=794a9561-a7da-4b53-8481-2dddaf344788 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.905755532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.922144213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.923036013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.9742545Z" level=info msg="Created container 9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1: kube-system/kindnet-49wbb/kindnet-cni" id=ae67a082-15da-4578-81ae-b3b7fe45b6e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.975131104Z" level=info msg="Starting container: 9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1" id=78122307-4ea3-482d-bac6-c852005f307a name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.977158588Z" level=info msg="Started container" PID=1059 containerID=9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1 description=kube-system/kindnet-49wbb/kindnet-cni id=78122307-4ea3-482d-bac6-c852005f307a name=/runtime.v1.RuntimeService/StartContainer sandboxID=bee316cc90992e74dee42e3fecd4ee6a520fbd0cb5e9643883841d052ec73b5f
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.980234552Z" level=info msg="Created container d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8: kube-system/kube-proxy-dgplp/kube-proxy" id=794a9561-a7da-4b53-8481-2dddaf344788 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.980883569Z" level=info msg="Starting container: d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8" id=8fb65bc9-9b19-49e2-a682-79c2becc3029 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:34 newest-cni-161621 crio[613]: time="2025-10-02T21:57:34.985208772Z" level=info msg="Started container" PID=1063 containerID=d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8 description=kube-system/kube-proxy-dgplp/kube-proxy id=8fb65bc9-9b19-49e2-a682-79c2becc3029 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d39f9d09819ec       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   fdf2281f7d215       kube-proxy-dgplp                            kube-system
	9f46faf037e3b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   bee316cc90992       kindnet-49wbb                               kube-system
	c40c6f2c67087       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   344c97851088d       kube-apiserver-newest-cni-161621            kube-system
	b078018388b12       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   3406375dcc296       kube-scheduler-newest-cni-161621            kube-system
	75092104da292       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   4d3fa71007a0d       kube-controller-manager-newest-cni-161621   kube-system
	2a823c2b3de35       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   a230f341b5bc6       etcd-newest-cni-161621                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-161621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-161621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=newest-cni-161621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:57:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-161621
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:57:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 21:57:33 +0000   Thu, 02 Oct 2025 21:56:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-161621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a2812ce78fc428e93e7b6e890f7c160
	  System UUID:                b330f332-73e4-4f26-be0f-56d7dcddd7b6
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-161621                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-49wbb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-161621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-161621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-dgplp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-161621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 48s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-161621 event: Registered Node newest-cni-161621 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-161621 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-161621 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-161621 event: Registered Node newest-cni-161621 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:22] overlayfs: idmapped layers are currently not supported
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[ +27.661855] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2a823c2b3de352016dd9f4c479498ee6603ea9699523f15b7819586a24c96d1c] <==
	{"level":"warn","ts":"2025-10-02T21:57:31.893731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:31.942212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:31.972913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.007571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.020273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.059273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.101715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.131536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.151033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.190374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.217796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.251495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.274375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.306711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.327000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.347070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.366434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.388074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.423511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.451916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.490156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.513667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.536761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.549468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:32.610106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:57:41 up  6:39,  0 user,  load average: 6.59, 4.31, 2.70
	Linux newest-cni-161621 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9f46faf037e3b26311d57bbd2403f4ba97123b93d5abf348c16012b46b80b4f1] <==
	I1002 21:57:35.118517       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:57:35.118804       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 21:57:35.118913       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:57:35.118925       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:57:35.118937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:57:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:57:35.311401       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:57:35.311468       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:57:35.311506       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:57:35.312406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c40c6f2c6708756dcd9e411f67e0ef77b2a41cd885fdc94a2b014b396c107329] <==
	I1002 21:57:33.794436       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:57:33.794469       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:57:33.820231       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:57:33.820259       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:57:33.820439       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:57:33.826269       1 aggregator.go:171] initial CRD sync complete...
	I1002 21:57:33.826290       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 21:57:33.826298       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:57:33.826306       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:57:33.862778       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 21:57:33.862810       1 policy_source.go:240] refreshing policies
	I1002 21:57:33.862892       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:57:33.871740       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1002 21:57:33.941720       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:57:34.392747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:57:34.588926       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:57:34.602899       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:57:34.822492       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:57:34.943883       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:57:34.961721       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:57:35.085128       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.158.195"}
	I1002 21:57:35.120692       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.47.124"}
	I1002 21:57:37.368706       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:57:37.416947       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:57:37.814122       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [75092104da2923b9ef504bb95a2cf821338c0a5b4f18a2923b106161a85c4d68] <==
	I1002 21:57:37.359459       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:57:37.363882       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:57:37.366201       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:57:37.371183       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:57:37.372339       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:57:37.376125       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:57:37.376414       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:57:37.376486       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:57:37.376524       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:57:37.376559       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:57:37.376219       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 21:57:37.376367       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 21:57:37.378857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 21:57:37.378957       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:57:37.393525       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:57:37.393697       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:57:37.394206       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-161621"
	I1002 21:57:37.394303       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 21:57:37.400288       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:57:37.408171       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:57:37.408353       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:57:37.417974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:37.418063       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:57:37.418098       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:57:37.418217       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [d39f9d09819ecc40f2d571e3d5fcfc97fcae9fbf57a5dd00ecd9cee62125d1a8] <==
	I1002 21:57:35.218875       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:57:35.309215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:57:35.410887       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:57:35.411016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 21:57:35.411155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:57:35.429273       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:57:35.429324       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:57:35.433287       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:57:35.433631       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:57:35.433691       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:35.434988       1 config.go:200] "Starting service config controller"
	I1002 21:57:35.435050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:57:35.435095       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:57:35.435123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:57:35.435158       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:57:35.435184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:57:35.435850       1 config.go:309] "Starting node config controller"
	I1002 21:57:35.435907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:57:35.435938       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:57:35.535511       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:57:35.535515       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:57:35.535548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b078018388b1265576c4528a0a0db92ebf50836da58a04935c814d0163d732b8] <==
	I1002 21:57:32.465320       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:57:34.625545       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:57:34.633174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:34.676306       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:57:34.677164       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:57:34.677238       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:57:34.677293       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:57:34.685532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:34.693018       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:34.690141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:34.693286       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:34.778815       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:57:34.794096       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:34.795434       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:57:32 newest-cni-161621 kubelet[729]: E1002 21:57:32.547603     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-161621\" not found" node="newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.941806     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.942914     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.942997     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.943026     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.945524     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: E1002 21:57:33.995923     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-161621\" already exists" pod="kube-system/etcd-newest-cni-161621"
	Oct 02 21:57:33 newest-cni-161621 kubelet[729]: I1002 21:57:33.995959     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: E1002 21:57:34.025171     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-161621\" already exists" pod="kube-system/kube-apiserver-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.025209     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: E1002 21:57:34.069472     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-161621\" already exists" pod="kube-system/kube-controller-manager-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.069522     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: E1002 21:57:34.090532     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-161621\" already exists" pod="kube-system/kube-scheduler-newest-cni-161621"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.525386     729 apiserver.go:52] "Watching apiserver"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.546399     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582499     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-cni-cfg\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582544     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-xtables-lock\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582608     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7efc7ea2-18d3-49a1-a3eb-1c2767978396-lib-modules\") pod \"kindnet-49wbb\" (UID: \"7efc7ea2-18d3-49a1-a3eb-1c2767978396\") " pod="kube-system/kindnet-49wbb"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582650     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7a0b577-70f7-44ba-8990-3694f9fcc965-xtables-lock\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.582668     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7a0b577-70f7-44ba-8990-3694f9fcc965-lib-modules\") pod \"kube-proxy-dgplp\" (UID: \"b7a0b577-70f7-44ba-8990-3694f9fcc965\") " pod="kube-system/kube-proxy-dgplp"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: I1002 21:57:34.618711     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 21:57:34 newest-cni-161621 kubelet[729]: W1002 21:57:34.876202     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4274608d314fbc5ead31d739526b17b5f37d5676a964de84d12427d22995cbad/crio-fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2 WatchSource:0}: Error finding container fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2: Status 404 returned error can't find the container with id fdf2281f7d215f68aa2f8a8d4167c884e0bdb528daf30feddd8d30814ad395f2
	Oct 02 21:57:36 newest-cni-161621 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:57:36 newest-cni-161621 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:57:36 newest-cni-161621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
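
The `describe nodes` output above shows the underlying symptom for this cluster: the node reports Ready=False with reason KubeletNotReady because no CNI configuration file exists yet in /etc/cni/net.d/. A minimal sketch of checking that same condition programmatically with client-go (the kubeconfig path and clientset wiring are assumptions for illustration, not part of the test harness):

	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: a kubeconfig at the default location with a context for
		// the node under test; the CI harness selects clusters via --context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-161621", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Walk the status conditions; Ready=False with reason KubeletNotReady
		// is what the dump above reports while the CNI config is missing.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
			}
		}
	}
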
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-161621 -n newest-cni-161621
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-161621 -n newest-cni-161621: exit status 2 (406.474772ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-161621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg: exit status 1 (121.046609ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ntw6h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-xwlp9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-426rg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-161621 describe pod coredns-66bc5c9577-ntw6h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-xwlp9 kubernetes-dashboard-855c9754f9-426rg: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.99s)
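
Note that the NotFound errors above come from the helper invocation rather than from the pods having been deleted: `kubectl describe pod` is run without `-n`, so it searches only the default namespace, while the pods listed by the `--field-selector=status.phase!=Running` query live in kube-system and kubernetes-dashboard. A minimal sketch of the namespaced lookup (the namespace assignments are assumptions inferred from the pod name prefixes):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Assumption: namespaces inferred from the pod names in the helper output.
		pods := map[string]string{
			"coredns-66bc5c9577-ntw6h":                   "kube-system",
			"storage-provisioner":                        "kube-system",
			"dashboard-metrics-scraper-6ffb444bf9-xwlp9": "kubernetes-dashboard",
			"kubernetes-dashboard-855c9754f9-426rg":      "kubernetes-dashboard",
		}
		for pod, ns := range pods {
			// Same describe call the helper makes, but scoped to a namespace.
			out, err := exec.Command("kubectl", "--context", "newest-cni-161621",
				"-n", ns, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("--- %s/%s (err=%v)\n%s\n", ns, pod, err, out)
		}
	}
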

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-842185 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-842185 --alsologtostderr -v=1: exit status 80 (2.237922199s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-842185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:58:08.084292 1214615 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:58:08.084414 1214615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:58:08.084418 1214615 out.go:374] Setting ErrFile to fd 2...
	I1002 21:58:08.084422 1214615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:58:08.084786 1214615 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:58:08.085070 1214615 out.go:368] Setting JSON to false
	I1002 21:58:08.085094 1214615 mustload.go:65] Loading cluster: default-k8s-diff-port-842185
	I1002 21:58:08.085781 1214615 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:58:08.086573 1214615 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842185 --format={{.State.Status}}
	I1002 21:58:08.111326 1214615 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:58:08.111671 1214615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:58:08.205646 1214615 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 21:58:08.195672484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:58:08.206344 1214615 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-842185 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 21:58:08.209644 1214615 out.go:179] * Pausing node default-k8s-diff-port-842185 ... 
	I1002 21:58:08.213611 1214615 host.go:66] Checking if "default-k8s-diff-port-842185" exists ...
	I1002 21:58:08.214008 1214615 ssh_runner.go:195] Run: systemctl --version
	I1002 21:58:08.214085 1214615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842185
	I1002 21:58:08.235775 1214615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34221 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/default-k8s-diff-port-842185/id_rsa Username:docker}
	I1002 21:58:08.337805 1214615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:58:08.352434 1214615 pause.go:51] kubelet running: true
	I1002 21:58:08.352511 1214615 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:58:08.629837 1214615 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:58:08.629925 1214615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:58:08.731706 1214615 cri.go:89] found id: "64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff"
	I1002 21:58:08.731729 1214615 cri.go:89] found id: "930f916f8aefd1566d89657d7223a999b3fb9270aa00ef267c7cb03f1708cb13"
	I1002 21:58:08.731734 1214615 cri.go:89] found id: "33c80e1a2f69cfb800549e3ae649e668646a0d83184ad0cf1e11f9d7f5043da4"
	I1002 21:58:08.731738 1214615 cri.go:89] found id: "279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756"
	I1002 21:58:08.731742 1214615 cri.go:89] found id: "8463b0dcd75cd6f75372c417bc075beb8b056e1c586c005ab100861750cc9798"
	I1002 21:58:08.731746 1214615 cri.go:89] found id: "73e02330ac81eb5bb08428b429c32a892fcf1a19e76b8a5bcd1f6bca9df0bb91"
	I1002 21:58:08.731749 1214615 cri.go:89] found id: "bcc8f7a1359466ea9b86d847aaee545ff42e89546c49da9e5a234add61b6e16c"
	I1002 21:58:08.731753 1214615 cri.go:89] found id: "e26cd38eb38f190780ac20a4962aaad22e5e6f78b418d8570ba1272bc0e6fcd1"
	I1002 21:58:08.731756 1214615 cri.go:89] found id: "e2012db846f04fedc4a91921d9a9ee6083af8a50e9ac894b9c0ef5f131609e50"
	I1002 21:58:08.731764 1214615 cri.go:89] found id: "80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	I1002 21:58:08.731768 1214615 cri.go:89] found id: "da783ac8f46fe44b10bd4efc52b5f34498ded24088c7777ecc3a07c3ce7bf0ea"
	I1002 21:58:08.731771 1214615 cri.go:89] found id: ""
	I1002 21:58:08.731821 1214615 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:58:08.745000 1214615 retry.go:31] will retry after 264.333542ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:58:08Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:58:09.010321 1214615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:58:09.027896 1214615 pause.go:51] kubelet running: false
	I1002 21:58:09.027970 1214615 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:58:09.227803 1214615 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:58:09.227903 1214615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:58:09.348664 1214615 cri.go:89] found id: "64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff"
	I1002 21:58:09.348697 1214615 cri.go:89] found id: "930f916f8aefd1566d89657d7223a999b3fb9270aa00ef267c7cb03f1708cb13"
	I1002 21:58:09.348703 1214615 cri.go:89] found id: "33c80e1a2f69cfb800549e3ae649e668646a0d83184ad0cf1e11f9d7f5043da4"
	I1002 21:58:09.348706 1214615 cri.go:89] found id: "279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756"
	I1002 21:58:09.348709 1214615 cri.go:89] found id: "8463b0dcd75cd6f75372c417bc075beb8b056e1c586c005ab100861750cc9798"
	I1002 21:58:09.348712 1214615 cri.go:89] found id: "73e02330ac81eb5bb08428b429c32a892fcf1a19e76b8a5bcd1f6bca9df0bb91"
	I1002 21:58:09.348716 1214615 cri.go:89] found id: "bcc8f7a1359466ea9b86d847aaee545ff42e89546c49da9e5a234add61b6e16c"
	I1002 21:58:09.348719 1214615 cri.go:89] found id: "e26cd38eb38f190780ac20a4962aaad22e5e6f78b418d8570ba1272bc0e6fcd1"
	I1002 21:58:09.348723 1214615 cri.go:89] found id: "e2012db846f04fedc4a91921d9a9ee6083af8a50e9ac894b9c0ef5f131609e50"
	I1002 21:58:09.348738 1214615 cri.go:89] found id: "80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	I1002 21:58:09.348746 1214615 cri.go:89] found id: "da783ac8f46fe44b10bd4efc52b5f34498ded24088c7777ecc3a07c3ce7bf0ea"
	I1002 21:58:09.348750 1214615 cri.go:89] found id: ""
	I1002 21:58:09.348816 1214615 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:58:09.362166 1214615 retry.go:31] will retry after 517.79747ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:58:09Z" level=error msg="open /run/runc: no such file or directory"
	I1002 21:58:09.880891 1214615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:58:09.893850 1214615 pause.go:51] kubelet running: false
	I1002 21:58:09.893949 1214615 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 21:58:10.084471 1214615 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 21:58:10.084565 1214615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 21:58:10.205488 1214615 cri.go:89] found id: "64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff"
	I1002 21:58:10.205514 1214615 cri.go:89] found id: "930f916f8aefd1566d89657d7223a999b3fb9270aa00ef267c7cb03f1708cb13"
	I1002 21:58:10.205520 1214615 cri.go:89] found id: "33c80e1a2f69cfb800549e3ae649e668646a0d83184ad0cf1e11f9d7f5043da4"
	I1002 21:58:10.205524 1214615 cri.go:89] found id: "279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756"
	I1002 21:58:10.205528 1214615 cri.go:89] found id: "8463b0dcd75cd6f75372c417bc075beb8b056e1c586c005ab100861750cc9798"
	I1002 21:58:10.205531 1214615 cri.go:89] found id: "73e02330ac81eb5bb08428b429c32a892fcf1a19e76b8a5bcd1f6bca9df0bb91"
	I1002 21:58:10.205535 1214615 cri.go:89] found id: "bcc8f7a1359466ea9b86d847aaee545ff42e89546c49da9e5a234add61b6e16c"
	I1002 21:58:10.205539 1214615 cri.go:89] found id: "e26cd38eb38f190780ac20a4962aaad22e5e6f78b418d8570ba1272bc0e6fcd1"
	I1002 21:58:10.205542 1214615 cri.go:89] found id: "e2012db846f04fedc4a91921d9a9ee6083af8a50e9ac894b9c0ef5f131609e50"
	I1002 21:58:10.205561 1214615 cri.go:89] found id: "80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	I1002 21:58:10.205567 1214615 cri.go:89] found id: "da783ac8f46fe44b10bd4efc52b5f34498ded24088c7777ecc3a07c3ce7bf0ea"
	I1002 21:58:10.205571 1214615 cri.go:89] found id: ""
	I1002 21:58:10.205619 1214615 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:58:10.235649 1214615 out.go:203] 
	W1002 21:58:10.239112 1214615 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:58:10.239259 1214615 out.go:285] * 
	* 
	W1002 21:58:10.247559 1214615 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:58:10.250705 1214615 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-842185 --alsologtostderr -v=1 failed: exit status 80
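
Exit status 80 corresponds to the GUEST_PAUSE error in the stderr block above: after disabling the kubelet, minikube enumerates the CRI containers and then shells out to `sudo runc list -f json`, which fails on this crio node because /run/runc does not exist, and the retry.go lines show the call being retried twice before the command gives up. A minimal sketch of that probe-with-backoff pattern (the helper name and fixed delays are illustrative, not minikube's implementation):

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// listRunc is an illustrative stand-in for the `sudo runc list -f json`
	// probe seen in the log; on this node it fails with "open /run/runc: no
	// such file or directory" because no runc state directory exists.
	func listRunc() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}
	
	func main() {
		// Illustrative backoff; the log shows retries after ~264ms and ~518ms.
		delays := []time.Duration{264 * time.Millisecond, 518 * time.Millisecond}
		out, err := listRunc()
		for i := 0; err != nil && i < len(delays); i++ {
			fmt.Printf("will retry after %v: %v\n", delays[i], err)
			time.Sleep(delays[i])
			out, err = listRunc()
		}
		if err != nil {
			fmt.Println("giving up:", err) // minikube exits with GUEST_PAUSE here
			return
		}
		fmt.Printf("%s\n", out)
	}
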
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-842185
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-842185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f",
	        "Created": "2025-10-02T21:55:02.411044691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1206384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:56:50.987306258Z",
	            "FinishedAt": "2025-10-02T21:56:49.93626773Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/hostname",
	        "HostsPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/hosts",
	        "LogPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f-json.log",
	        "Name": "/default-k8s-diff-port-842185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-842185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-842185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f",
	                "LowerDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-842185",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-842185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-842185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-842185",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-842185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4195ddf5c38e97ca3a617684671f692cffc86534ca694752e111f8e379e4ab5e",
	            "SandboxKey": "/var/run/docker/netns/4195ddf5c38e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-842185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e2:de:69:1b:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c75f49aff1de19eab04e162890223324556cf47bb7a7732a62f8c3500b677819",
	                    "EndpointID": "364b6568c0a17bc4504d15826917ea2d45360cf7b5d36aead789ddda7d3b2aeb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-842185",
	                        "724f09ef6992"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
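Note the empty "HostPort" values under "PortBindings": the node container is published with --publish=127.0.0.1::<port> (see the docker run invocation later in this log), so Docker picks ephemeral host ports and the effective mappings appear only under "NetworkSettings.Ports" (22/tcp is bound to 127.0.0.1:34221 here). The same Go template minikube itself uses further down can extract a mapping directly, for example the SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-842185

Against the inspect output above this prints 34221.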
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
E1002 21:58:10.710442  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:10.716723  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:10.728040  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:10.749369  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185: exit status 2 (468.131904ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
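The "(may be ok)" caveat reflects minikube's status convention: the exit code encodes per-component state bit-wise (host, control plane, kubernetes), so a non-zero exit while the Host field prints "Running" is consistent with a cluster that is paused rather than broken. To see the full breakdown instead of just the Host field (a sketch):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-842185; echo "exit=$?"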
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842185 logs -n 25
E1002 21:58:10.791278  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:10.872547  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:11.034046  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:11.360130  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:12.002375  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-842185 logs -n 25: (2.000944439s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842185 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ stop    │ -p newest-cni-161621 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-161621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ image   │ newest-cni-161621 image list --format=json                                                                                                                                                                                                    │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ pause   │ -p newest-cni-161621 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ delete  │ -p newest-cni-161621                                                                                                                                                                                                                          │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ delete  │ -p newest-cni-161621                                                                                                                                                                                                                          │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p auto-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-644857                  │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ image   │ default-k8s-diff-port-842185 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ pause   │ -p default-k8s-diff-port-842185 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:57:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:57:44.723831 1212570 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:57:44.723946 1212570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:44.723951 1212570 out.go:374] Setting ErrFile to fd 2...
	I1002 21:57:44.723956 1212570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:44.724222 1212570 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:57:44.724634 1212570 out.go:368] Setting JSON to false
	I1002 21:57:44.725531 1212570 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24002,"bootTime":1759418263,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:57:44.725611 1212570 start.go:140] virtualization:  
	I1002 21:57:44.729567 1212570 out.go:179] * [auto-644857] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:57:44.733789 1212570 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:57:44.733827 1212570 notify.go:221] Checking for updates...
	I1002 21:57:44.740180 1212570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:57:44.743245 1212570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:44.746209 1212570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:57:44.749147 1212570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:57:44.752106 1212570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:57:44.755591 1212570 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:44.755704 1212570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:57:44.779174 1212570 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:57:44.779307 1212570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:44.844882 1212570 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:57:44.835219018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:44.844988 1212570 docker.go:319] overlay module found
	I1002 21:57:44.848228 1212570 out.go:179] * Using the docker driver based on user configuration
	I1002 21:57:44.851179 1212570 start.go:306] selected driver: docker
	I1002 21:57:44.851205 1212570 start.go:936] validating driver "docker" against <nil>
	I1002 21:57:44.851217 1212570 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:57:44.851961 1212570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:44.916901 1212570 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:57:44.907923324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:44.917073 1212570 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:57:44.917318 1212570 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:57:44.920361 1212570 out.go:179] * Using Docker driver with root privileges
	I1002 21:57:44.923287 1212570 cni.go:84] Creating CNI manager for ""
	I1002 21:57:44.923364 1212570 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:44.923378 1212570 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:57:44.923456 1212570 start.go:350] cluster config:
	{Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:44.927625 1212570 out.go:179] * Starting "auto-644857" primary control-plane node in "auto-644857" cluster
	I1002 21:57:44.930550 1212570 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:57:44.933508 1212570 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:57:44.936405 1212570 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:44.936462 1212570 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:57:44.936477 1212570 cache.go:59] Caching tarball of preloaded images
	I1002 21:57:44.936523 1212570 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:57:44.936565 1212570 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:57:44.936575 1212570 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:57:44.936685 1212570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/config.json ...
	I1002 21:57:44.936702 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/config.json: {Name:mka62e1620ce851af9f8719107917da2c74da7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:44.955927 1212570 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:57:44.955950 1212570 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:57:44.955964 1212570 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:57:44.955986 1212570 start.go:361] acquireMachinesLock for auto-644857: {Name:mk23f4e52b42150aa8165a99b1d727a3022ec133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:57:44.956096 1212570 start.go:365] duration metric: took 88.777µs to acquireMachinesLock for "auto-644857"
	I1002 21:57:44.956137 1212570 start.go:94] Provisioning new machine with config: &{Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:57:44.956208 1212570 start.go:126] createHost starting for "" (driver="docker")
	W1002 21:57:40.664617 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:42.666684 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:45.166468 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:44.959628 1212570 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:57:44.959847 1212570 start.go:160] libmachine.API.Create for "auto-644857" (driver="docker")
	I1002 21:57:44.959889 1212570 client.go:168] LocalClient.Create starting
	I1002 21:57:44.959957 1212570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:57:44.959998 1212570 main.go:141] libmachine: Decoding PEM data...
	I1002 21:57:44.960016 1212570 main.go:141] libmachine: Parsing certificate...
	I1002 21:57:44.960072 1212570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:57:44.960093 1212570 main.go:141] libmachine: Decoding PEM data...
	I1002 21:57:44.960106 1212570 main.go:141] libmachine: Parsing certificate...
	I1002 21:57:44.960456 1212570 cli_runner.go:164] Run: docker network inspect auto-644857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:57:44.976638 1212570 cli_runner.go:211] docker network inspect auto-644857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:57:44.976720 1212570 network_create.go:284] running [docker network inspect auto-644857] to gather additional debugging logs...
	I1002 21:57:44.976752 1212570 cli_runner.go:164] Run: docker network inspect auto-644857
	W1002 21:57:44.991632 1212570 cli_runner.go:211] docker network inspect auto-644857 returned with exit code 1
	I1002 21:57:44.991673 1212570 network_create.go:287] error running [docker network inspect auto-644857]: docker network inspect auto-644857: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-644857 not found
	I1002 21:57:44.991686 1212570 network_create.go:289] output of [docker network inspect auto-644857]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-644857 not found
	
	** /stderr **
	I1002 21:57:44.991790 1212570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:57:45.030261 1212570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:57:45.031176 1212570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:57:45.032436 1212570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:57:45.035181 1212570 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 21:57:45.035742 1212570 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c75f49aff1de IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4e:75:a2:8f:3e:b6} reservation:<nil>}
	I1002 21:57:45.040289 1212570 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 21:57:45.041050 1212570 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a517f0}
	I1002 21:57:45.041084 1212570 network_create.go:124] attempt to create docker network auto-644857 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1002 21:57:45.041161 1212570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-644857 auto-644857
	I1002 21:57:45.188097 1212570 network_create.go:108] docker network auto-644857 192.168.103.0/24 created
	I1002 21:57:45.188136 1212570 kic.go:121] calculated static IP "192.168.103.2" for the "auto-644857" container
	I1002 21:57:45.188245 1212570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:57:45.207181 1212570 cli_runner.go:164] Run: docker volume create auto-644857 --label name.minikube.sigs.k8s.io=auto-644857 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:57:45.237685 1212570 oci.go:103] Successfully created a docker volume auto-644857
	I1002 21:57:45.237809 1212570 cli_runner.go:164] Run: docker run --rm --name auto-644857-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-644857 --entrypoint /usr/bin/test -v auto-644857:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:57:45.897963 1212570 oci.go:107] Successfully prepared a docker volume auto-644857
	I1002 21:57:45.898016 1212570 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:45.898069 1212570 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:57:45.898152 1212570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-644857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1002 21:57:47.664293 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:50.166526 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:50.197258 1212570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-644857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.299063277s)
	I1002 21:57:50.197294 1212570 kic.go:203] duration metric: took 4.29922177s to extract preloaded images to volume ...
	W1002 21:57:50.197438 1212570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:57:50.197561 1212570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:57:50.252980 1212570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-644857 --name auto-644857 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-644857 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-644857 --network auto-644857 --ip 192.168.103.2 --volume auto-644857:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:57:50.555831 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Running}}
	I1002 21:57:50.574882 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Status}}
	I1002 21:57:50.600121 1212570 cli_runner.go:164] Run: docker exec auto-644857 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:57:50.652152 1212570 oci.go:144] the created container "auto-644857" has a running status.
	I1002 21:57:50.652179 1212570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa...
	I1002 21:57:51.612376 1212570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:57:51.632137 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Status}}
	I1002 21:57:51.648636 1212570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:57:51.648661 1212570 kic_runner.go:114] Args: [docker exec --privileged auto-644857 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:57:51.692634 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Status}}
	I1002 21:57:51.731595 1212570 machine.go:93] provisionDockerMachine start ...
	I1002 21:57:51.731703 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:51.760880 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:51.761209 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:51.761219 1212570 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:57:51.761830 1212570 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33330->127.0.0.1:34231: read: connection reset by peer
	W1002 21:57:52.665400 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:53.664473 1206253 pod_ready.go:94] pod "coredns-66bc5c9577-5hq6c" is "Ready"
	I1002 21:57:53.664503 1206253 pod_ready.go:86] duration metric: took 40.505701955s for pod "coredns-66bc5c9577-5hq6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.667201 1206253 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.671524 1206253 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:53.671553 1206253 pod_ready.go:86] duration metric: took 4.326787ms for pod "etcd-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.673704 1206253 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.677772 1206253 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:53.677802 1206253 pod_ready.go:86] duration metric: took 4.06853ms for pod "kube-apiserver-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.680188 1206253 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.862666 1206253 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:53.862696 1206253 pod_ready.go:86] duration metric: took 182.479893ms for pod "kube-controller-manager-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:54.062823 1206253 pod_ready.go:83] waiting for pod "kube-proxy-vhggd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:54.462767 1206253 pod_ready.go:94] pod "kube-proxy-vhggd" is "Ready"
	I1002 21:57:54.462791 1206253 pod_ready.go:86] duration metric: took 399.941226ms for pod "kube-proxy-vhggd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:54.662996 1206253 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:55.067378 1206253 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:55.067405 1206253 pod_ready.go:86] duration metric: took 404.380478ms for pod "kube-scheduler-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:55.067418 1206253 pod_ready.go:40] duration metric: took 41.91277134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:57:55.150795 1206253 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:57:55.154275 1206253 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842185" cluster and "default" namespace by default
	I1002 21:57:54.893720 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-644857
	
	I1002 21:57:54.893745 1212570 ubuntu.go:182] provisioning hostname "auto-644857"
	I1002 21:57:54.893835 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:54.911994 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:54.912312 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:54.912334 1212570 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-644857 && echo "auto-644857" | sudo tee /etc/hostname
	I1002 21:57:55.055809 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-644857
	
	I1002 21:57:55.055953 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:55.080236 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:55.080563 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:55.080587 1212570 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-644857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-644857/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-644857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:57:55.254229 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:57:55.254257 1212570 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:57:55.254285 1212570 ubuntu.go:190] setting up certificates
	I1002 21:57:55.254295 1212570 provision.go:84] configureAuth start
	I1002 21:57:55.254354 1212570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-644857
	I1002 21:57:55.289698 1212570 provision.go:143] copyHostCerts
	I1002 21:57:55.289763 1212570 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:57:55.289778 1212570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:57:55.289856 1212570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:57:55.289971 1212570 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:57:55.289983 1212570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:57:55.290014 1212570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:57:55.290195 1212570 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:57:55.290207 1212570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:57:55.290241 1212570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:57:55.290297 1212570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.auto-644857 san=[127.0.0.1 192.168.103.2 auto-644857 localhost minikube]
	I1002 21:57:55.693419 1212570 provision.go:177] copyRemoteCerts
	I1002 21:57:55.693512 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:57:55.693569 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:55.724287 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:55.822164 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:57:55.839851 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 21:57:55.856702 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:57:55.879553 1212570 provision.go:87] duration metric: took 625.233459ms to configureAuth
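(configureAuth above issues a CA-signed server certificate whose SANs cover the container's IPs and names, san=[127.0.0.1 192.168.103.2 auto-644857 localhost minikube]. A self-contained crypto/x509 sketch of how such a SAN list is expressed; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem.)

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-644857"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: IP SANs plus DNS-name SANs.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:    []string{"auto-644857", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}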
	I1002 21:57:55.879629 1212570 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:57:55.879839 1212570 config.go:182] Loaded profile config "auto-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:55.879954 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:55.898086 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:55.898413 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:55.898440 1212570 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:57:56.162603 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:57:56.162629 1212570 machine.go:96] duration metric: took 4.431015314s to provisionDockerMachine
	I1002 21:57:56.162639 1212570 client.go:171] duration metric: took 11.202738628s to LocalClient.Create
	I1002 21:57:56.162674 1212570 start.go:168] duration metric: took 11.202808304s to libmachine.API.Create "auto-644857"
	I1002 21:57:56.162687 1212570 start.go:294] postStartSetup for "auto-644857" (driver="docker")
	I1002 21:57:56.162697 1212570 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:57:56.162760 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:57:56.162812 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.181173 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.278395 1212570 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:57:56.281765 1212570 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:57:56.281795 1212570 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:57:56.281806 1212570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:57:56.281856 1212570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:57:56.281935 1212570 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:57:56.282054 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:57:56.289412 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:57:56.307011 1212570 start.go:297] duration metric: took 144.307628ms for postStartSetup
	I1002 21:57:56.307376 1212570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-644857
	I1002 21:57:56.324415 1212570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/config.json ...
	I1002 21:57:56.324694 1212570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:57:56.324741 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.343476 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.438938 1212570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:57:56.443701 1212570 start.go:129] duration metric: took 11.48747839s to createHost
	I1002 21:57:56.443723 1212570 start.go:84] releasing machines lock for "auto-644857", held for 11.487613182s
	I1002 21:57:56.443792 1212570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-644857
	I1002 21:57:56.461385 1212570 ssh_runner.go:195] Run: cat /version.json
	I1002 21:57:56.461434 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.461714 1212570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:57:56.461779 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.481093 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.495738 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.680114 1212570 ssh_runner.go:195] Run: systemctl --version
	I1002 21:57:56.687465 1212570 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:57:56.735764 1212570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:57:56.740128 1212570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:57:56.740235 1212570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:57:56.772382 1212570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
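(The find/mv above parks conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so only the runtime's own CNI config remains active. A rough Go equivalent, assuming the same glob patterns:)

package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already parked on a previous run
			}
			_ = os.Rename(m, m+".mk_disabled")
		}
	}
}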
	I1002 21:57:56.772406 1212570 start.go:496] detecting cgroup driver to use...
	I1002 21:57:56.772459 1212570 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:57:56.772515 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:57:56.790212 1212570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:57:56.803090 1212570 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:57:56.803180 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:57:56.821045 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:57:56.841043 1212570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:57:56.966269 1212570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:57:57.104632 1212570 docker.go:234] disabling docker service ...
	I1002 21:57:57.104751 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:57:57.133126 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:57:57.147152 1212570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:57:57.272586 1212570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:57:57.410196 1212570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:57:57.424353 1212570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:57:57.441188 1212570 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:57:57.441281 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.450652 1212570 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:57:57.450726 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.460769 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.469862 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.479142 1212570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:57:57.486858 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.495437 1212570 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.509162 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.518366 1212570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:57:57.526439 1212570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:57:57.533423 1212570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:57.657900 1212570 ssh_runner.go:195] Run: sudo systemctl restart crio
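(The sequence above first points crictl at CRI-O via /etc/crictl.yaml, then rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed before restarting crio. A hedged Go sketch of one such in-place key rewrite; setConfKey is an illustrative helper, not minikube's code:)

package main

import (
	"os"
	"regexp"
)

// setConfKey replaces any existing `key = ...` line, mirroring
// `sudo sed -i 's|^.*key = .*$|key = "value"|'` from the log.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	data = re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, data, 0644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
}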
	I1002 21:57:57.796290 1212570 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:57:57.796365 1212570 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:57:57.800566 1212570 start.go:564] Will wait 60s for crictl version
	I1002 21:57:57.800681 1212570 ssh_runner.go:195] Run: which crictl
	I1002 21:57:57.804488 1212570 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:57:57.830398 1212570 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:57:57.830508 1212570 ssh_runner.go:195] Run: crio --version
	I1002 21:57:57.859174 1212570 ssh_runner.go:195] Run: crio --version
	I1002 21:57:57.896744 1212570 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:57:57.899611 1212570 cli_runner.go:164] Run: docker network inspect auto-644857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:57:57.916047 1212570 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1002 21:57:57.919729 1212570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:57:57.929465 1212570 kubeadm.go:883] updating cluster {Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:57:57.929585 1212570 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:57.929640 1212570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:57.963445 1212570 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:57.963481 1212570 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:57:57.963540 1212570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:57.991050 1212570 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:57.991075 1212570 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:57:57.991083 1212570 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1002 21:57:57.991175 1212570 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-644857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:57:57.991261 1212570 ssh_runner.go:195] Run: crio config
	I1002 21:57:58.066825 1212570 cni.go:84] Creating CNI manager for ""
	I1002 21:57:58.066850 1212570 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:58.066865 1212570 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:57:58.066906 1212570 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-644857 NodeName:auto-644857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:57:58.067061 1212570 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-644857"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
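(minikube generates the kubeadm config printed above from templates filled with per-profile values such as the advertise address, CRI socket, and node name. A reduced text/template sketch of that rendering; the struct fields here are illustrative, not minikube's actual types:)

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, struct {
		AdvertiseAddress, CRISocket, NodeName string
		BindPort                              int
	}{"192.168.103.2", "unix:///var/run/crio/crio.sock", "auto-644857", 8443}); err != nil {
		panic(err)
	}
}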
	
	I1002 21:57:58.067154 1212570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:57:58.075669 1212570 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:57:58.075766 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:57:58.083600 1212570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 21:57:58.097521 1212570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:57:58.110659 1212570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1002 21:57:58.124037 1212570 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:57:58.127805 1212570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:57:58.137818 1212570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:58.257031 1212570 ssh_runner.go:195] Run: sudo systemctl start kubelet
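(The three scp steps above install the kubelet unit, its kubeadm drop-in, and the kubeadm.yaml, then reload systemd and start kubelet. A compressed local sketch of the same install-reload-start pattern; the ExecStart flags are abbreviated from the full command in the log:)

package main

import (
	"os"
	"os/exec"
)

func main() {
	const dir = "/etc/systemd/system/kubelet.service.d"
	dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml\n"
	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
	// systemd must re-read unit files before the new drop-in takes effect.
	_ = exec.Command("systemctl", "daemon-reload").Run()
	_ = exec.Command("systemctl", "start", "kubelet").Run()
}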
	I1002 21:57:58.273159 1212570 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857 for IP: 192.168.103.2
	I1002 21:57:58.273242 1212570 certs.go:195] generating shared ca certs ...
	I1002 21:57:58.273274 1212570 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:58.273469 1212570 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:57:58.273550 1212570 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:57:58.273585 1212570 certs.go:257] generating profile certs ...
	I1002 21:57:58.273664 1212570 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.key
	I1002 21:57:58.273712 1212570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt with IP's: []
	I1002 21:57:58.675031 1212570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt ...
	I1002 21:57:58.675063 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: {Name:mkea4edef2453de54aa0ca1560115255cdac1127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:58.675272 1212570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.key ...
	I1002 21:57:58.675285 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.key: {Name:mk0c46468e39fbfee43da4d3902411d6edc46b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:58.675388 1212570 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8
	I1002 21:57:58.675409 1212570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1002 21:57:59.111375 1212570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8 ...
	I1002 21:57:59.111405 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8: {Name:mk1cb990fd7550717cca2ff726307498c7c36578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:59.111600 1212570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8 ...
	I1002 21:57:59.111617 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8: {Name:mk29c2e47081200f79809af00e08e4ffe78f9477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:59.111706 1212570 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt
	I1002 21:57:59.111792 1212570 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key
	I1002 21:57:59.111882 1212570 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key
	I1002 21:57:59.111900 1212570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt with IP's: []
	I1002 21:58:00.033811 1212570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt ...
	I1002 21:58:00.033847 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt: {Name:mk6303dd6d6aefa91fb1d78cde023fb7c0821c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:58:00.034139 1212570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key ...
	I1002 21:58:00.034151 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key: {Name:mk7a713156d5bc785fdcdaf6b47a2db89787d30d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:58:00.034425 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:58:00.034467 1212570 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:58:00.034477 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:58:00.034504 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:58:00.034531 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:58:00.034558 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:58:00.034611 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:58:00.035295 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:58:00.160723 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:58:00.254089 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:58:00.311165 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:58:00.363072 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:58:00.400280 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:58:00.424960 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:58:00.450168 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 21:58:00.481242 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:58:00.507775 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:58:00.538216 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:58:00.561351 1212570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:58:00.578402 1212570 ssh_runner.go:195] Run: openssl version
	I1002 21:58:00.587975 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:58:00.597505 1212570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:58:00.601793 1212570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:58:00.601862 1212570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:58:00.645375 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:58:00.653989 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:58:00.662618 1212570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:58:00.666579 1212570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:58:00.666649 1212570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:58:00.714653 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:58:00.724226 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:58:00.733023 1212570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:58:00.736723 1212570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:58:00.736798 1212570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:58:00.778304 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
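(Each certificate above is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL-style trust directories index CA certificates. A small sketch of that pattern; linkByHash is a hypothetical helper:)

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the hash-then-symlink steps from the log.
func linkByHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}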
	I1002 21:58:00.786764 1212570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:58:00.790288 1212570 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:58:00.790341 1212570 kubeadm.go:400] StartCluster: {Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:58:00.790415 1212570 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:58:00.790476 1212570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:58:00.817121 1212570 cri.go:89] found id: ""
	I1002 21:58:00.817270 1212570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:58:00.824983 1212570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:58:00.832671 1212570 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:58:00.832742 1212570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:58:00.840996 1212570 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:58:00.841015 1212570 kubeadm.go:157] found existing configuration files:
	
	I1002 21:58:00.841066 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:58:00.849076 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:58:00.849139 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:58:00.856817 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:58:00.864983 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:58:00.865050 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:58:00.872574 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:58:00.880651 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:58:00.880732 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:58:00.888463 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:58:00.896198 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:58:00.896261 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
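(The grep/rm loop above clears any kubeconfig that does not reference the expected control-plane endpoint, so kubeadm init starts from a clean slate; here every file is missing, so all four removals are no-ops. A compact Go rendering of the same check:)

package main

import (
	"bytes"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(p) // stale or absent: drop before kubeadm init
		}
	}
}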
	I1002 21:58:00.903664 1212570 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
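(kubeadm init is then launched through bash with minikube's bundled binaries prepended to PATH; preflight checks that cannot hold inside a container, such as swap, CPU/memory, and bridge sysctls, are skipped via --ignore-preflight-errors. A minimal os/exec sketch of the invocation shape, with the flag list abbreviated from the full command above:)

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/bin/bash", "-c",
		`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification`)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}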
	I1002 21:58:00.942305 1212570 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:58:00.942618 1212570 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:58:00.965625 1212570 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:58:00.965705 1212570 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:58:00.965747 1212570 kubeadm.go:318] OS: Linux
	I1002 21:58:00.965802 1212570 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:58:00.965858 1212570 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:58:00.965926 1212570 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:58:00.965981 1212570 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:58:00.966060 1212570 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:58:00.966117 1212570 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:58:00.966169 1212570 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:58:00.966223 1212570 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:58:00.966274 1212570 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:58:01.033221 1212570 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:58:01.033357 1212570 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:58:01.033466 1212570 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:58:01.046483 1212570 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:58:01.053924 1212570 out.go:252]   - Generating certificates and keys ...
	I1002 21:58:01.054177 1212570 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:58:01.054282 1212570 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:58:02.681568 1212570 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:58:03.484813 1212570 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:58:03.582389 1212570 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:58:03.893208 1212570 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:58:04.214831 1212570 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:58:04.215037 1212570 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-644857 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1002 21:58:04.682864 1212570 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:58:04.683232 1212570 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-644857 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1002 21:58:05.462677 1212570 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:58:07.217106 1212570 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:58:07.287366 1212570 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:58:07.287669 1212570 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:58:07.433155 1212570 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:58:07.797943 1212570 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:58:08.261698 1212570 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:58:08.309832 1212570 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:58:08.486617 1212570 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:58:08.490411 1212570 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:58:08.490505 1212570 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:58:08.494162 1212570 out.go:252]   - Booting up control plane ...
	I1002 21:58:08.494298 1212570 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:58:08.494412 1212570 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:58:08.494494 1212570 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:58:08.514305 1212570 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:58:08.514418 1212570 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:58:08.530487 1212570 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:58:08.530590 1212570 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:58:08.530633 1212570 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:58:08.691452 1212570 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:58:08.691595 1212570 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:58:09.692786 1212570 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001408756s
	I1002 21:58:09.696778 1212570 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:58:09.696882 1212570 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1002 21:58:09.697177 1212570 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:58:09.697268 1212570 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
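(The control-plane checks above poll well-known health endpoints: the kubelet at http://127.0.0.1:10248/healthz, the controller-manager and scheduler on their localhost ports, and the apiserver's /livez. A minimal probe of the plain-HTTP kubelet endpoint, the only one that needs no TLS setup:)

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: status=%d body=%q\n", resp.StatusCode, body)
}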
	
	
	==> CRI-O <==
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.304261389Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a0f1f64-734e-43b2-b3eb-9a6083b62677 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.308797155Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f3d1487f-51b0-448d-87dc-1544e499f6d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.309085082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.32034365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.321197937Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2ad01c404d574b3f14660b5671b4019833b26f5c8eda22aa3213b68143b9cd91/merged/etc/passwd: no such file or directory"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.321445807Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2ad01c404d574b3f14660b5671b4019833b26f5c8eda22aa3213b68143b9cd91/merged/etc/group: no such file or directory"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.321969937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.347348298Z" level=info msg="Created container 64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff: kube-system/storage-provisioner/storage-provisioner" id=f3d1487f-51b0-448d-87dc-1544e499f6d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.349374526Z" level=info msg="Starting container: 64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff" id=0984bb45-bfdf-460a-afcf-a1d4ff9fb39d name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.354117737Z" level=info msg="Started container" PID=1636 containerID=64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff description=kube-system/storage-provisioner/storage-provisioner id=0984bb45-bfdf-460a-afcf-a1d4ff9fb39d name=/runtime.v1.RuntimeService/StartContainer sandboxID=afe6e7f77d958110e6f8f0d523c52f7347197ae6bf33c52bedf367df339b0f75
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.010414247Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.019417834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.019578839Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.019662537Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.025039873Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.02521536Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.025317051Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.035785035Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.035825625Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.035850001Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.042191554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.042231183Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.042255642Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.047165905Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.047356637Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	64f0b68c2673b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   afe6e7f77d958       storage-provisioner                                    kube-system
	80a05ac4afd1a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   66ee016490c6a       dashboard-metrics-scraper-6ffb444bf9-zxs4q             kubernetes-dashboard
	da783ac8f46fe       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   afe9d586bf532       kubernetes-dashboard-855c9754f9-qfnfg                  kubernetes-dashboard
	a70dec87e9599       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   fc5f162800afb       busybox                                                default
	930f916f8aefd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   e8b9c2da2fba2       kindnet-qb4vm                                          kube-system
	33c80e1a2f69c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   1896666969f6c       coredns-66bc5c9577-5hq6c                               kube-system
	279f49e646f24       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   afe6e7f77d958       storage-provisioner                                    kube-system
	8463b0dcd75cd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   9dcb281719246       kube-proxy-vhggd                                       kube-system
	73e02330ac81e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   1e78517030124       kube-scheduler-default-k8s-diff-port-842185            kube-system
	bcc8f7a135946       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a1f342a8f27ad       kube-controller-manager-default-k8s-diff-port-842185   kube-system
	e26cd38eb38f1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1bdc760ce9420       kube-apiserver-default-k8s-diff-port-842185            kube-system
	e2012db846f04       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   5a72d5f08b53a       etcd-default-k8s-diff-port-842185                      kube-system
	
	
	==> coredns [33c80e1a2f69cfb800549e3ae649e668646a0d83184ad0cf1e11f9d7f5043da4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58695 - 56081 "HINFO IN 5251318617325741392.7856113883525916999. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022590159s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-842185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-842185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=default-k8s-diff-port-842185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_55_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:55:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-842185
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:58:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-842185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8819512bab6f4089b6ec00d17025ca74
	  System UUID:                aa48841e-0403-43a7-8420-f3cab19a557a
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 coredns-66bc5c9577-5hq6c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m32s
	  kube-system                 etcd-default-k8s-diff-port-842185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m37s
	  kube-system                 kindnet-qb4vm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m32s
	  kube-system                 kube-apiserver-default-k8s-diff-port-842185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-842185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-vhggd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-default-k8s-diff-port-842185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zxs4q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qfnfg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m29s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m49s (x8 over 2m49s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x8 over 2m49s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x8 over 2m49s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m37s                  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s                  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m37s                  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m33s                  node-controller  Node default-k8s-diff-port-842185 event: Registered Node default-k8s-diff-port-842185 in Controller
	  Normal   NodeReady                110s                   kubelet          Node default-k8s-diff-port-842185 status is now: NodeReady
	  Normal   Starting                 73s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 73s)      kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 73s)      kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 73s)      kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node default-k8s-diff-port-842185 event: Registered Node default-k8s-diff-port-842185 in Controller
	
	
	==> dmesg <==
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[ +27.661855] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2012db846f04fedc4a91921d9a9ee6083af8a50e9ac894b9c0ef5f131609e50] <==
	{"level":"warn","ts":"2025-10-02T21:57:08.740688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.760046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.785874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.818529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.845565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.862291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.876095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.896688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.918003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.000723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.057218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.091667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.109025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.132014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.148273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.163639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.181760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.227318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.249843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.264741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.304280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.311382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.336556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.351206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.406688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38934","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:58:12 up  6:40,  0 user,  load average: 4.98, 4.15, 2.70
	Linux default-k8s-diff-port-842185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [930f916f8aefd1566d89657d7223a999b3fb9270aa00ef267c7cb03f1708cb13] <==
	I1002 21:57:11.811580       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:57:11.812641       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:57:11.812779       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:57:11.812798       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:57:11.812810       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:57:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:57:12.011161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:57:12.011184       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:57:12.011195       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:57:12.011307       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:57:42.011382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:57:42.011475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:57:42.011661       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:57:42.014262       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:57:43.611377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:57:43.611478       1 metrics.go:72] Registering metrics
	I1002 21:57:43.611621       1 controller.go:711] "Syncing nftables rules"
	I1002 21:57:52.010146       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:57:52.010193       1 main.go:301] handling current node
	I1002 21:58:02.009899       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:58:02.009948       1 main.go:301] handling current node
	I1002 21:58:12.018428       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:58:12.018463       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e26cd38eb38f190780ac20a4962aaad22e5e6f78b418d8570ba1272bc0e6fcd1] <==
	I1002 21:57:10.567194       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:57:10.567232       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:57:10.567408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:57:10.567455       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:57:10.577796       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:57:10.582485       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:57:10.584770       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:57:10.615120       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 21:57:10.615150       1 policy_source.go:240] refreshing policies
	I1002 21:57:10.622148       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1002 21:57:10.627621       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:57:10.654350       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:57:10.655587       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:57:10.702146       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:57:10.899454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:57:11.269207       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:57:12.188992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:57:12.367048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:57:12.471465       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:57:12.511158       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:57:12.747526       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.17.134"}
	I1002 21:57:12.871173       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.178.55"}
	I1002 21:57:15.047212       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:57:15.240187       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:57:15.289904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bcc8f7a1359466ea9b86d847aaee545ff42e89546c49da9e5a234add61b6e16c] <==
	I1002 21:57:14.881156       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:57:14.882585       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:57:14.882905       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:57:14.882973       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:57:14.883075       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:57:14.885772       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:57:14.889482       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:57:14.889590       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:57:14.889614       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:57:14.889619       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:57:14.889625       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:57:14.898317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:14.898413       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:57:14.898444       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:57:14.901155       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:57:14.914239       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:57:14.914294       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:57:14.925074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:14.927507       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:57:14.930120       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:57:14.935420       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:57:14.935510       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:57:14.935552       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:57:14.935435       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:57:14.940871       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [8463b0dcd75cd6f75372c417bc075beb8b056e1c586c005ab100861750cc9798] <==
	I1002 21:57:11.922956       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:57:12.203828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:57:12.507049       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:57:12.516962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:57:12.517137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:57:12.955945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:57:12.956063       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:57:12.962890       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:57:12.963243       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:57:12.963386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:12.964598       1 config.go:200] "Starting service config controller"
	I1002 21:57:12.964658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:57:12.964701       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:57:12.964728       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:57:12.964762       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:57:12.964789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:57:12.965416       1 config.go:309] "Starting node config controller"
	I1002 21:57:12.970885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:57:12.970984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:57:13.064832       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 21:57:13.064940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:57:13.064966       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [73e02330ac81eb5bb08428b429c32a892fcf1a19e76b8a5bcd1f6bca9df0bb91] <==
	I1002 21:57:07.605428       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:57:10.704391       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:57:10.704422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:10.749258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:57:10.749317       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:57:10.749371       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:10.749381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:10.749407       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:10.749415       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:10.758390       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:57:10.758467       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:57:10.855512       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:57:10.855665       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:10.855465       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:15.561589     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbcg\" (UniqueName: \"kubernetes.io/projected/eb119278-7163-4ac0-b60d-0a6b58e3192a-kube-api-access-jhbcg\") pod \"dashboard-metrics-scraper-6ffb444bf9-zxs4q\" (UID: \"eb119278-7163-4ac0-b60d-0a6b58e3192a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q"
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:15.561623     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb119278-7163-4ac0-b60d-0a6b58e3192a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zxs4q\" (UID: \"eb119278-7163-4ac0-b60d-0a6b58e3192a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q"
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: W1002 21:57:15.785047     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/crio-66ee016490c6ae26153eaf3367a45c8696c55c4f0080ea46feb67bfc983ff7f9 WatchSource:0}: Error finding container 66ee016490c6ae26153eaf3367a45c8696c55c4f0080ea46feb67bfc983ff7f9: Status 404 returned error can't find the container with id 66ee016490c6ae26153eaf3367a45c8696c55c4f0080ea46feb67bfc983ff7f9
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: W1002 21:57:15.811356     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/crio-afe9d586bf532dcf9e707ca215de7954e05ef56fbd84d8922b7c403badb523df WatchSource:0}: Error finding container afe9d586bf532dcf9e707ca215de7954e05ef56fbd84d8922b7c403badb523df: Status 404 returned error can't find the container with id afe9d586bf532dcf9e707ca215de7954e05ef56fbd84d8922b7c403badb523df
	Oct 02 21:57:22 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:22.228429     774 scope.go:117] "RemoveContainer" containerID="df1752540859e240340bf9726d6c278bc10bc51afa5f38349f54b935ce913a77"
	Oct 02 21:57:23 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:23.233526     774 scope.go:117] "RemoveContainer" containerID="df1752540859e240340bf9726d6c278bc10bc51afa5f38349f54b935ce913a77"
	Oct 02 21:57:23 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:23.234071     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:23 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:23.234266     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:24 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:24.241900     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:24 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:24.242083     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:25 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:25.753002     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:25 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:25.753207     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:39 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:39.982618     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:40.288355     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:40.288822     774 scope.go:117] "RemoveContainer" containerID="80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:40.289090     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:40.320223     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qfnfg" podStartSLOduration=13.5712871 podStartE2EDuration="25.320206918s" podCreationTimestamp="2025-10-02 21:57:15 +0000 UTC" firstStartedPulling="2025-10-02 21:57:15.816310463 +0000 UTC m=+16.318179428" lastFinishedPulling="2025-10-02 21:57:27.56523028 +0000 UTC m=+28.067099246" observedRunningTime="2025-10-02 21:57:28.287369701 +0000 UTC m=+28.789238667" watchObservedRunningTime="2025-10-02 21:57:40.320206918 +0000 UTC m=+40.822075892"
	Oct 02 21:57:42 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:42.297639     774 scope.go:117] "RemoveContainer" containerID="279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756"
	Oct 02 21:57:45 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:45.753255     774 scope.go:117] "RemoveContainer" containerID="80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	Oct 02 21:57:45 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:45.753473     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:56 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:56.983403     774 scope.go:117] "RemoveContainer" containerID="80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	Oct 02 21:57:56 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:56.984007     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:58:08 default-k8s-diff-port-842185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:58:08 default-k8s-diff-port-842185 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:58:08 default-k8s-diff-port-842185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [da783ac8f46fe44b10bd4efc52b5f34498ded24088c7777ecc3a07c3ce7bf0ea] <==
	2025/10/02 21:57:27 Using namespace: kubernetes-dashboard
	2025/10/02 21:57:27 Using in-cluster config to connect to apiserver
	2025/10/02 21:57:27 Using secret token for csrf signing
	2025/10/02 21:57:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:57:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:57:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 21:57:27 Generating JWE encryption key
	2025/10/02 21:57:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:57:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:57:29 Initializing JWE encryption key from synchronized object
	2025/10/02 21:57:29 Creating in-cluster Sidecar client
	2025/10/02 21:57:29 Serving insecurely on HTTP port: 9090
	2025/10/02 21:57:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:57:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:57:27 Starting overwatch
	
	
	==> storage-provisioner [279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756] <==
	I1002 21:57:11.854529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:57:41.855840       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff] <==
	W1002 21:57:42.392448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:45.847160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:50.107387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:53.708462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:56.762384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:59.791164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:59.832478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:57:59.832653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:57:59.839051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842185_be71bb3f-1562-4370-9904-64565b33ad69!
	I1002 21:57:59.839204       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8b17831-0c1d-4950-9708-ff3cf4191d2a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-842185_be71bb3f-1562-4370-9904-64565b33ad69 became leader
	W1002 21:57:59.847742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:59.859818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:57:59.939208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842185_be71bb3f-1562-4370-9904-64565b33ad69!
	W1002 21:58:01.864382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:01.870276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:03.873894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:03.880730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:05.883607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:05.901361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:07.907342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:07.917928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:09.924394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:09.930197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:11.934407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:11.946577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
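
Note the common failure signature inside the log dump above: coredns, kindnet, and the first storage-provisioner instance all report dial tcp 10.96.0.1:443: i/o timeout against the in-cluster kubernetes Service VIP while the apiserver was coming back up. A quick manual probe of that VIP, sketched against this run's profile (this assumes curl is available in the kicbase node image):

	# Probe the Service VIP the components timed out on; -k skips cert
	# verification, --max-time bounds the wait.
	out/minikube-linux-arm64 -p default-k8s-diff-port-842185 ssh \
	  "curl -sk --max-time 5 https://10.96.0.1:443/version"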
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
E1002 21:58:13.286180  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185: exit status 2 (563.760065ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
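
For reference, the pod query just above uses a field selector to surface anything not in the Running phase; the same check can be reproduced by hand, assuming the test context is still present in the kubeconfig:

	# Names of all pods, in any namespace, whose phase is not Running.
	kubectl --context default-k8s-diff-port-842185 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'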
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-842185
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-842185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f",
	        "Created": "2025-10-02T21:55:02.411044691Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1206384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:56:50.987306258Z",
	            "FinishedAt": "2025-10-02T21:56:49.93626773Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/hostname",
	        "HostsPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/hosts",
	        "LogPath": "/var/lib/docker/containers/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f-json.log",
	        "Name": "/default-k8s-diff-port-842185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-842185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-842185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f",
	                "LowerDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d-init/diff:/var/lib/docker/overlay2/1a33e071476f501a87a81114c85632fbdfd819b7e5b2fb7e00b397806b28fabb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40b28f1c7350f57a8441dbd43f58861ac9ba8246d909b8ff6eee8c63bf13ca6d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-842185",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-842185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-842185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-842185",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-842185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4195ddf5c38e97ca3a617684671f692cffc86534ca694752e111f8e379e4ab5e",
	            "SandboxKey": "/var/run/docker/netns/4195ddf5c38e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-842185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:e2:de:69:1b:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c75f49aff1de19eab04e162890223324556cf47bb7a7732a62f8c3500b677819",
	                    "EndpointID": "364b6568c0a17bc4504d15826917ea2d45360cf7b5d36aead789ddda7d3b2aeb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-842185",
	                        "724f09ef6992"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
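
The inspect dump above is also where the host-side port mappings live; a couple of one-liners for pulling a single mapping out of it rather than scanning the full JSON, shown as a sketch against this run's container name:

	# Host address bound to the node's SSH port (prints 127.0.0.1:34221 here).
	docker port default-k8s-diff-port-842185 22/tcp
	# The same value via a Go template over the inspect data.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-842185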
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185: exit status 2 (495.310268ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
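The probe the harness runs here can be reproduced by hand; as the note above says, a non-zero exit with a Running host is tolerated ("may be ok"), since pausing stops components without stopping the container:

	# Query only the host state of the profile via a Go template.
	out/minikube-linux-arm64 status --format={{.Host}} \
	  -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185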
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842185 logs -n 25
E1002 21:58:15.848341  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-842185 logs -n 25: (1.92335616s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-661954                                                                                                                                                                                                                          │ no-preload-661954            │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ delete  │ -p disable-driver-mounts-013352                                                                                                                                                                                                               │ disable-driver-mounts-013352 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:54 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:54 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:55 UTC │
	│ start   │ -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:55 UTC │ 02 Oct 25 21:56 UTC │
	│ image   │ embed-certs-132977 image list --format=json                                                                                                                                                                                                   │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ pause   │ -p embed-certs-132977 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ delete  │ -p embed-certs-132977                                                                                                                                                                                                                         │ embed-certs-132977           │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842185 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:56 UTC │
	│ start   │ -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-161621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ stop    │ -p newest-cni-161621 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-161621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ image   │ newest-cni-161621 image list --format=json                                                                                                                                                                                                    │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ pause   │ -p newest-cni-161621 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ delete  │ -p newest-cni-161621                                                                                                                                                                                                                          │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ delete  │ -p newest-cni-161621                                                                                                                                                                                                                          │ newest-cni-161621            │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p auto-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-644857                  │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │                     │
	│ image   │ default-k8s-diff-port-842185 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ pause   │ -p default-k8s-diff-port-842185 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-842185 │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:57:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:57:44.723831 1212570 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:57:44.723946 1212570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:44.723951 1212570 out.go:374] Setting ErrFile to fd 2...
	I1002 21:57:44.723956 1212570 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:57:44.724222 1212570 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:57:44.724634 1212570 out.go:368] Setting JSON to false
	I1002 21:57:44.725531 1212570 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24002,"bootTime":1759418263,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:57:44.725611 1212570 start.go:140] virtualization:  
	I1002 21:57:44.729567 1212570 out.go:179] * [auto-644857] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:57:44.733789 1212570 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:57:44.733827 1212570 notify.go:221] Checking for updates...
	I1002 21:57:44.740180 1212570 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:57:44.743245 1212570 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:57:44.746209 1212570 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:57:44.749147 1212570 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:57:44.752106 1212570 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:57:44.755591 1212570 config.go:182] Loaded profile config "default-k8s-diff-port-842185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:44.755704 1212570 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:57:44.779174 1212570 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:57:44.779307 1212570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:44.844882 1212570 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:57:44.835219018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:44.844988 1212570 docker.go:319] overlay module found
	I1002 21:57:44.848228 1212570 out.go:179] * Using the docker driver based on user configuration
	I1002 21:57:44.851179 1212570 start.go:306] selected driver: docker
	I1002 21:57:44.851205 1212570 start.go:936] validating driver "docker" against <nil>
	I1002 21:57:44.851217 1212570 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:57:44.851961 1212570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:57:44.916901 1212570 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:57:44.907923324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:57:44.917073 1212570 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:57:44.917318 1212570 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:57:44.920361 1212570 out.go:179] * Using Docker driver with root privileges
	I1002 21:57:44.923287 1212570 cni.go:84] Creating CNI manager for ""
	I1002 21:57:44.923364 1212570 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:44.923378 1212570 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:57:44.923456 1212570 start.go:350] cluster config:
	{Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:57:44.927625 1212570 out.go:179] * Starting "auto-644857" primary control-plane node in "auto-644857" cluster
	I1002 21:57:44.930550 1212570 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:57:44.933508 1212570 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:57:44.936405 1212570 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:44.936462 1212570 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:57:44.936477 1212570 cache.go:59] Caching tarball of preloaded images
	I1002 21:57:44.936523 1212570 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:57:44.936565 1212570 preload.go:233] Found /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:57:44.936575 1212570 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:57:44.936685 1212570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/config.json ...
	I1002 21:57:44.936702 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/config.json: {Name:mka62e1620ce851af9f8719107917da2c74da7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:44.955927 1212570 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:57:44.955950 1212570 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:57:44.955964 1212570 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:57:44.955986 1212570 start.go:361] acquireMachinesLock for auto-644857: {Name:mk23f4e52b42150aa8165a99b1d727a3022ec133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:57:44.956096 1212570 start.go:365] duration metric: took 88.777µs to acquireMachinesLock for "auto-644857"
	I1002 21:57:44.956137 1212570 start.go:94] Provisioning new machine with config: &{Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:57:44.956208 1212570 start.go:126] createHost starting for "" (driver="docker")
	W1002 21:57:40.664617 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:42.666684 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:45.166468 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:44.959628 1212570 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:57:44.959847 1212570 start.go:160] libmachine.API.Create for "auto-644857" (driver="docker")
	I1002 21:57:44.959889 1212570 client.go:168] LocalClient.Create starting
	I1002 21:57:44.959957 1212570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem
	I1002 21:57:44.959998 1212570 main.go:141] libmachine: Decoding PEM data...
	I1002 21:57:44.960016 1212570 main.go:141] libmachine: Parsing certificate...
	I1002 21:57:44.960072 1212570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem
	I1002 21:57:44.960093 1212570 main.go:141] libmachine: Decoding PEM data...
	I1002 21:57:44.960106 1212570 main.go:141] libmachine: Parsing certificate...
	I1002 21:57:44.960456 1212570 cli_runner.go:164] Run: docker network inspect auto-644857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:57:44.976638 1212570 cli_runner.go:211] docker network inspect auto-644857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:57:44.976720 1212570 network_create.go:284] running [docker network inspect auto-644857] to gather additional debugging logs...
	I1002 21:57:44.976752 1212570 cli_runner.go:164] Run: docker network inspect auto-644857
	W1002 21:57:44.991632 1212570 cli_runner.go:211] docker network inspect auto-644857 returned with exit code 1
	I1002 21:57:44.991673 1212570 network_create.go:287] error running [docker network inspect auto-644857]: docker network inspect auto-644857: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-644857 not found
	I1002 21:57:44.991686 1212570 network_create.go:289] output of [docker network inspect auto-644857]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-644857 not found
	
	** /stderr **
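	This inspect-then-report sequence is minikube's existence check for the profile network; "network ... not found" on stderr plus exit status 1 is the expected signal before creation. A compact standalone equivalent (without the subnet pinning that follows):
	
	# Create the network only if inspect confirms it is absent.
	docker network inspect auto-644857 >/dev/null 2>&1 \
	  || docker network create auto-644857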
	I1002 21:57:44.991790 1212570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:57:45.030261 1212570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
	I1002 21:57:45.031176 1212570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-478a83a9ba8a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:87:10:56:a0:1c} reservation:<nil>}
	I1002 21:57:45.032436 1212570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb579a1208f3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:a8:08:2c:81:8c} reservation:<nil>}
	I1002 21:57:45.035181 1212570 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 21:57:45.035742 1212570 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c75f49aff1de IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4e:75:a2:8f:3e:b6} reservation:<nil>}
	I1002 21:57:45.040289 1212570 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1002 21:57:45.041050 1212570 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a517f0}
	I1002 21:57:45.041084 1212570 network_create.go:124] attempt to create docker network auto-644857 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1002 21:57:45.041161 1212570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-644857 auto-644857
	I1002 21:57:45.188097 1212570 network_create.go:108] docker network auto-644857 192.168.103.0/24 created
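	For reference, the create call above distilled to a standalone command; subnet and gateway come from the free-subnet scan, and the MTU/label options mirror the log line (minikube additionally passes its --ip-masq/--icc bridge options, omitted here):
	
	docker network create --driver=bridge \
	  --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  auto-644857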
	I1002 21:57:45.188136 1212570 kic.go:121] calculated static IP "192.168.103.2" for the "auto-644857" container
	I1002 21:57:45.188245 1212570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:57:45.207181 1212570 cli_runner.go:164] Run: docker volume create auto-644857 --label name.minikube.sigs.k8s.io=auto-644857 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:57:45.237685 1212570 oci.go:103] Successfully created a docker volume auto-644857
	I1002 21:57:45.237809 1212570 cli_runner.go:164] Run: docker run --rm --name auto-644857-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-644857 --entrypoint /usr/bin/test -v auto-644857:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:57:45.897963 1212570 oci.go:107] Successfully prepared a docker volume auto-644857
	I1002 21:57:45.898016 1212570 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:45.898069 1212570 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:57:45.898152 1212570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-644857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1002 21:57:47.664293 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	W1002 21:57:50.166526 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:50.197258 1212570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-644857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.299063277s)
	I1002 21:57:50.197294 1212570 kic.go:203] duration metric: took 4.29922177s to extract preloaded images to volume ...
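	The sidecar pattern above populates a named volume from the preload tarball using a throwaway container. A sketch with the long paths shortened to placeholder variables:
	
	# PRELOAD and KICBASE stand in for the full path and digest from the log.
	PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro -v auto-644857:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir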
	W1002 21:57:50.197438 1212570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:57:50.197561 1212570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:57:50.252980 1212570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-644857 --name auto-644857 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-644857 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-644857 --network auto-644857 --ip 192.168.103.2 --volume auto-644857:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:57:50.555831 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Running}}
	I1002 21:57:50.574882 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Status}}
	I1002 21:57:50.600121 1212570 cli_runner.go:164] Run: docker exec auto-644857 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:57:50.652152 1212570 oci.go:144] the created container "auto-644857" has a running status.
	I1002 21:57:50.652179 1212570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa...
	I1002 21:57:51.612376 1212570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:57:51.632137 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Status}}
	I1002 21:57:51.648636 1212570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:57:51.648661 1212570 kic_runner.go:114] Args: [docker exec --privileged auto-644857 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:57:51.692634 1212570 cli_runner.go:164] Run: docker container inspect auto-644857 --format={{.State.Status}}
	I1002 21:57:51.731595 1212570 machine.go:93] provisionDockerMachine start ...
	I1002 21:57:51.731703 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:51.760880 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:51.761209 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:51.761219 1212570 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:57:51.761830 1212570 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33330->127.0.0.1:34231: read: connection reset by peer
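	The "connection reset by peer" on the first dial is the usual race with sshd coming up inside the fresh container; the provisioner simply retries. The host port it dials is discovered with the same Go template used throughout this log:
	
	# Host port mapped to the node container's sshd (22/tcp).
	docker container inspect auto-644857 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'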
	W1002 21:57:52.665400 1206253 pod_ready.go:104] pod "coredns-66bc5c9577-5hq6c" is not "Ready", error: <nil>
	I1002 21:57:53.664473 1206253 pod_ready.go:94] pod "coredns-66bc5c9577-5hq6c" is "Ready"
	I1002 21:57:53.664503 1206253 pod_ready.go:86] duration metric: took 40.505701955s for pod "coredns-66bc5c9577-5hq6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.667201 1206253 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.671524 1206253 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:53.671553 1206253 pod_ready.go:86] duration metric: took 4.326787ms for pod "etcd-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.673704 1206253 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.677772 1206253 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:53.677802 1206253 pod_ready.go:86] duration metric: took 4.06853ms for pod "kube-apiserver-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.680188 1206253 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:53.862666 1206253 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:53.862696 1206253 pod_ready.go:86] duration metric: took 182.479893ms for pod "kube-controller-manager-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:54.062823 1206253 pod_ready.go:83] waiting for pod "kube-proxy-vhggd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:54.462767 1206253 pod_ready.go:94] pod "kube-proxy-vhggd" is "Ready"
	I1002 21:57:54.462791 1206253 pod_ready.go:86] duration metric: took 399.941226ms for pod "kube-proxy-vhggd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:54.662996 1206253 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:55.067378 1206253 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842185" is "Ready"
	I1002 21:57:55.067405 1206253 pod_ready.go:86] duration metric: took 404.380478ms for pod "kube-scheduler-default-k8s-diff-port-842185" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:57:55.067418 1206253 pod_ready.go:40] duration metric: took 41.91277134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:57:55.150795 1206253 start.go:627] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:57:55.154275 1206253 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842185" cluster and "default" namespace by default
	I1002 21:57:54.893720 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-644857
	
	I1002 21:57:54.893745 1212570 ubuntu.go:182] provisioning hostname "auto-644857"
	I1002 21:57:54.893835 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:54.911994 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:54.912312 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:54.912334 1212570 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-644857 && echo "auto-644857" | sudo tee /etc/hostname
	I1002 21:57:55.055809 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-644857
	
	I1002 21:57:55.055953 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:55.080236 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:55.080563 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:55.080587 1212570 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-644857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-644857/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-644857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:57:55.254229 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:57:55.254257 1212570 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-992084/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-992084/.minikube}
	I1002 21:57:55.254285 1212570 ubuntu.go:190] setting up certificates
	I1002 21:57:55.254295 1212570 provision.go:84] configureAuth start
	I1002 21:57:55.254354 1212570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-644857
	I1002 21:57:55.289698 1212570 provision.go:143] copyHostCerts
	I1002 21:57:55.289763 1212570 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem, removing ...
	I1002 21:57:55.289778 1212570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem
	I1002 21:57:55.289856 1212570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/ca.pem (1078 bytes)
	I1002 21:57:55.289971 1212570 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem, removing ...
	I1002 21:57:55.289983 1212570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem
	I1002 21:57:55.290014 1212570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/cert.pem (1123 bytes)
	I1002 21:57:55.290195 1212570 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem, removing ...
	I1002 21:57:55.290207 1212570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem
	I1002 21:57:55.290241 1212570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-992084/.minikube/key.pem (1679 bytes)
	I1002 21:57:55.290297 1212570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem org=jenkins.auto-644857 san=[127.0.0.1 192.168.103.2 auto-644857 localhost minikube]
	I1002 21:57:55.693419 1212570 provision.go:177] copyRemoteCerts
	I1002 21:57:55.693512 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:57:55.693569 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:55.724287 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:55.822164 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:57:55.839851 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 21:57:55.856702 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:57:55.879553 1212570 provision.go:87] duration metric: took 625.233459ms to configureAuth
	I1002 21:57:55.879629 1212570 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:57:55.879839 1212570 config.go:182] Loaded profile config "auto-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:57:55.879954 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:55.898086 1212570 main.go:141] libmachine: Using SSH client type: native
	I1002 21:57:55.898413 1212570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34231 <nil> <nil>}
	I1002 21:57:55.898440 1212570 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:57:56.162603 1212570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:57:56.162629 1212570 machine.go:96] duration metric: took 4.431015314s to provisionDockerMachine
	I1002 21:57:56.162639 1212570 client.go:171] duration metric: took 11.202738628s to LocalClient.Create
	I1002 21:57:56.162674 1212570 start.go:168] duration metric: took 11.202808304s to libmachine.API.Create "auto-644857"
	I1002 21:57:56.162687 1212570 start.go:294] postStartSetup for "auto-644857" (driver="docker")
	I1002 21:57:56.162697 1212570 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:57:56.162760 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:57:56.162812 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.181173 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.278395 1212570 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:57:56.281765 1212570 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:57:56.281795 1212570 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:57:56.281806 1212570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/addons for local assets ...
	I1002 21:57:56.281856 1212570 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-992084/.minikube/files for local assets ...
	I1002 21:57:56.281935 1212570 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem -> 9939542.pem in /etc/ssl/certs
	I1002 21:57:56.282054 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:57:56.289412 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:57:56.307011 1212570 start.go:297] duration metric: took 144.307628ms for postStartSetup
	I1002 21:57:56.307376 1212570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-644857
	I1002 21:57:56.324415 1212570 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/config.json ...
	I1002 21:57:56.324694 1212570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:57:56.324741 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.343476 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.438938 1212570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:57:56.443701 1212570 start.go:129] duration metric: took 11.48747839s to createHost
	I1002 21:57:56.443723 1212570 start.go:84] releasing machines lock for "auto-644857", held for 11.487613182s
	I1002 21:57:56.443792 1212570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-644857
	I1002 21:57:56.461385 1212570 ssh_runner.go:195] Run: cat /version.json
	I1002 21:57:56.461434 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.461714 1212570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:57:56.461779 1212570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-644857
	I1002 21:57:56.481093 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.495738 1212570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34231 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/auto-644857/id_rsa Username:docker}
	I1002 21:57:56.680114 1212570 ssh_runner.go:195] Run: systemctl --version
	I1002 21:57:56.687465 1212570 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:57:56.735764 1212570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:57:56.740128 1212570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:57:56.740235 1212570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:57:56.772382 1212570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:57:56.772406 1212570 start.go:496] detecting cgroup driver to use...
	I1002 21:57:56.772459 1212570 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:57:56.772515 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:57:56.790212 1212570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:57:56.803090 1212570 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:57:56.803180 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:57:56.821045 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:57:56.841043 1212570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:57:56.966269 1212570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:57:57.104632 1212570 docker.go:234] disabling docker service ...
	I1002 21:57:57.104751 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:57:57.133126 1212570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:57:57.147152 1212570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:57:57.272586 1212570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:57:57.410196 1212570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:57:57.424353 1212570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:57:57.441188 1212570 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:57:57.441281 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.450652 1212570 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:57:57.450726 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.460769 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.469862 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.479142 1212570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:57:57.486858 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.495437 1212570 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:57:57.509162 1212570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
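Taken together, the sed edits above leave the CRI-O drop-in with the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports enabled. A sketch of the relevant keys in /etc/crio/crio.conf.d/02-crio.conf after these edits (assembled from the commands; the enclosing TOML section headers and any surrounding keys are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]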
	I1002 21:57:57.518366 1212570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:57:57.526439 1212570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:57:57.533423 1212570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:57.657900 1212570 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:57:57.796290 1212570 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:57:57.796365 1212570 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:57:57.800566 1212570 start.go:564] Will wait 60s for crictl version
	I1002 21:57:57.800681 1212570 ssh_runner.go:195] Run: which crictl
	I1002 21:57:57.804488 1212570 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:57:57.830398 1212570 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:57:57.830508 1212570 ssh_runner.go:195] Run: crio --version
	I1002 21:57:57.859174 1212570 ssh_runner.go:195] Run: crio --version
	I1002 21:57:57.896744 1212570 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:57:57.899611 1212570 cli_runner.go:164] Run: docker network inspect auto-644857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:57:57.916047 1212570 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1002 21:57:57.919729 1212570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
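The /etc/hosts rewrite above is an idempotent replace-then-append: filter out any stale host.minikube.internal line, append the fresh mapping, and install the result with sudo (only the final cp needs root). Annotated, assuming the same gateway IP as logged:

	# drop any existing entry (tab-anchored match), append the current one,
	# then install the temp file over /etc/hosts
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.103.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts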
	I1002 21:57:57.929465 1212570 kubeadm.go:883] updating cluster {Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:57:57.929585 1212570 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:57:57.929640 1212570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:57.963445 1212570 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:57.963481 1212570 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:57:57.963540 1212570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:57:57.991050 1212570 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:57:57.991075 1212570 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:57:57.991083 1212570 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1002 21:57:57.991175 1212570 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-644857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
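The empty ExecStart= line in the unit fragment above is the standard systemd override idiom: a drop-in cannot append to ExecStart, so it first clears the inherited command and then supplies the full replacement. Assembled from the lines logged above, the drop-in minikube scps to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf looks like:

	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-644857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2

	[Install]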
	I1002 21:57:57.991261 1212570 ssh_runner.go:195] Run: crio config
	I1002 21:57:58.066825 1212570 cni.go:84] Creating CNI manager for ""
	I1002 21:57:58.066850 1212570 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:57:58.066865 1212570 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:57:58.066906 1212570 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-644857 NodeName:auto-644857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:57:58.067061 1212570 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-644857"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
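	Before the real kubeadm init later in this run, a config like the one rendered above can be sanity-checked offline. A hedged sketch (kubeadm init supports --dry-run, and recent kubeadm releases also ship a `kubeadm config validate` subcommand; paths as logged above):
	
	  # validate the generated config without touching the node
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	  # or exercise the full init path without side effects
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run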
	
	I1002 21:57:58.067154 1212570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:57:58.075669 1212570 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:57:58.075766 1212570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:57:58.083600 1212570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 21:57:58.097521 1212570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:57:58.110659 1212570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1002 21:57:58.124037 1212570 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:57:58.127805 1212570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:57:58.137818 1212570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:57:58.257031 1212570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:57:58.273159 1212570 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857 for IP: 192.168.103.2
	I1002 21:57:58.273242 1212570 certs.go:195] generating shared ca certs ...
	I1002 21:57:58.273274 1212570 certs.go:227] acquiring lock for ca certs: {Name:mk681d09108a32bebe98576d436ca7212a0df8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:58.273469 1212570 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key
	I1002 21:57:58.273550 1212570 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key
	I1002 21:57:58.273585 1212570 certs.go:257] generating profile certs ...
	I1002 21:57:58.273664 1212570 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.key
	I1002 21:57:58.273712 1212570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt with IP's: []
	I1002 21:57:58.675031 1212570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt ...
	I1002 21:57:58.675063 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: {Name:mkea4edef2453de54aa0ca1560115255cdac1127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:58.675272 1212570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.key ...
	I1002 21:57:58.675285 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.key: {Name:mk0c46468e39fbfee43da4d3902411d6edc46b69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:58.675388 1212570 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8
	I1002 21:57:58.675409 1212570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1002 21:57:59.111375 1212570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8 ...
	I1002 21:57:59.111405 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8: {Name:mk1cb990fd7550717cca2ff726307498c7c36578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:59.111600 1212570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8 ...
	I1002 21:57:59.111617 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8: {Name:mk29c2e47081200f79809af00e08e4ffe78f9477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:57:59.111706 1212570 certs.go:382] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt.5906b6e8 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt
	I1002 21:57:59.111792 1212570 certs.go:386] copying /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key.5906b6e8 -> /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key
	I1002 21:57:59.111882 1212570 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key
	I1002 21:57:59.111900 1212570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt with IP's: []
	I1002 21:58:00.033811 1212570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt ...
	I1002 21:58:00.033847 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt: {Name:mk6303dd6d6aefa91fb1d78cde023fb7c0821c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:58:00.034139 1212570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key ...
	I1002 21:58:00.034151 1212570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key: {Name:mk7a713156d5bc785fdcdaf6b47a2db89787d30d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
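The apiserver serving cert generated above must carry the service VIP, loopback, and node IP as SANs ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2], per the log). A quick way to confirm what actually landed in the cert, sketched with stock openssl against the profile path logged above:

	# print the SAN extension of the freshly written apiserver cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'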
	I1002 21:58:00.034425 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem (1338 bytes)
	W1002 21:58:00.034467 1212570 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954_empty.pem, impossibly tiny 0 bytes
	I1002 21:58:00.034477 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 21:58:00.034504 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:58:00.034531 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:58:00.034558 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/certs/key.pem (1679 bytes)
	I1002 21:58:00.034611 1212570 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem (1708 bytes)
	I1002 21:58:00.035295 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:58:00.160723 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:58:00.254089 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:58:00.311165 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:58:00.363072 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:58:00.400280 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:58:00.424960 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:58:00.450168 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 21:58:00.481242 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/ssl/certs/9939542.pem --> /usr/share/ca-certificates/9939542.pem (1708 bytes)
	I1002 21:58:00.507775 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:58:00.538216 1212570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-992084/.minikube/certs/993954.pem --> /usr/share/ca-certificates/993954.pem (1338 bytes)
	I1002 21:58:00.561351 1212570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:58:00.578402 1212570 ssh_runner.go:195] Run: openssl version
	I1002 21:58:00.587975 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:58:00.597505 1212570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:58:00.601793 1212570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:58:00.601862 1212570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:58:00.645375 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:58:00.653989 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/993954.pem && ln -fs /usr/share/ca-certificates/993954.pem /etc/ssl/certs/993954.pem"
	I1002 21:58:00.662618 1212570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/993954.pem
	I1002 21:58:00.666579 1212570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:36 /usr/share/ca-certificates/993954.pem
	I1002 21:58:00.666649 1212570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/993954.pem
	I1002 21:58:00.714653 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/993954.pem /etc/ssl/certs/51391683.0"
	I1002 21:58:00.724226 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9939542.pem && ln -fs /usr/share/ca-certificates/9939542.pem /etc/ssl/certs/9939542.pem"
	I1002 21:58:00.733023 1212570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9939542.pem
	I1002 21:58:00.736723 1212570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:36 /usr/share/ca-certificates/9939542.pem
	I1002 21:58:00.736798 1212570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9939542.pem
	I1002 21:58:00.778304 1212570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9939542.pem /etc/ssl/certs/3ec20f2e.0"
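The ln -fs sequence above implements OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must be reachable via a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash`. The b5213941, 51391683, and 3ec20f2e names in the commands above are exactly those hashes. Reproduced by hand for the minikube CA:

	# derive the symlink name OpenSSL expects for a CA file
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0 here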
	I1002 21:58:00.786764 1212570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:58:00.790288 1212570 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:58:00.790341 1212570 kubeadm.go:400] StartCluster: {Name:auto-644857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-644857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:58:00.790415 1212570 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:58:00.790476 1212570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:58:00.817121 1212570 cri.go:89] found id: ""
	I1002 21:58:00.817270 1212570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:58:00.824983 1212570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:58:00.832671 1212570 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:58:00.832742 1212570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:58:00.840996 1212570 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:58:00.841015 1212570 kubeadm.go:157] found existing configuration files:
	
	I1002 21:58:00.841066 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:58:00.849076 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:58:00.849139 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:58:00.856817 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:58:00.864983 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:58:00.865050 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:58:00.872574 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:58:00.880651 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:58:00.880732 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:58:00.888463 1212570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:58:00.896198 1212570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:58:00.896261 1212570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:58:00.903664 1212570 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:58:00.942305 1212570 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:58:00.942618 1212570 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:58:00.965625 1212570 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:58:00.965705 1212570 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:58:00.965747 1212570 kubeadm.go:318] OS: Linux
	I1002 21:58:00.965802 1212570 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:58:00.965858 1212570 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:58:00.965926 1212570 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:58:00.965981 1212570 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:58:00.966060 1212570 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:58:00.966117 1212570 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:58:00.966169 1212570 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:58:00.966223 1212570 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:58:00.966274 1212570 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:58:01.033221 1212570 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:58:01.033357 1212570 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:58:01.033466 1212570 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:58:01.046483 1212570 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:58:01.053924 1212570 out.go:252]   - Generating certificates and keys ...
	I1002 21:58:01.054177 1212570 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:58:01.054282 1212570 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:58:02.681568 1212570 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:58:03.484813 1212570 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:58:03.582389 1212570 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:58:03.893208 1212570 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:58:04.214831 1212570 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:58:04.215037 1212570 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-644857 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1002 21:58:04.682864 1212570 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:58:04.683232 1212570 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-644857 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1002 21:58:05.462677 1212570 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:58:07.217106 1212570 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:58:07.287366 1212570 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:58:07.287669 1212570 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:58:07.433155 1212570 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:58:07.797943 1212570 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:58:08.261698 1212570 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:58:08.309832 1212570 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:58:08.486617 1212570 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:58:08.490411 1212570 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:58:08.490505 1212570 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:58:08.494162 1212570 out.go:252]   - Booting up control plane ...
	I1002 21:58:08.494298 1212570 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:58:08.494412 1212570 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:58:08.494494 1212570 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:58:08.514305 1212570 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:58:08.514418 1212570 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:58:08.530487 1212570 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:58:08.530590 1212570 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:58:08.530633 1212570 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:58:08.691452 1212570 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:58:08.691595 1212570 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:58:09.692786 1212570 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001408756s
	I1002 21:58:09.696778 1212570 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:58:09.696882 1212570 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1002 21:58:09.697177 1212570 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:58:09.697268 1212570 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
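The four health endpoints kubeadm polls here are plain HTTP(S) probes and can be hit by hand while waiting. A sketch using the addresses from the log (the kubelet healthz is cleartext on 10248; the control-plane components serve TLS with self-signed certs, hence -k; default RBAC and the components' always-allow paths permit unauthenticated access to these specific paths):

	curl -s  http://127.0.0.1:10248/healthz      # kubelet
	curl -sk https://192.168.103.2:8443/livez    # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez       # kube-scheduler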
	
	
	==> CRI-O <==
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.304261389Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a0f1f64-734e-43b2-b3eb-9a6083b62677 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.308797155Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f3d1487f-51b0-448d-87dc-1544e499f6d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.309085082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.32034365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.321197937Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2ad01c404d574b3f14660b5671b4019833b26f5c8eda22aa3213b68143b9cd91/merged/etc/passwd: no such file or directory"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.321445807Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2ad01c404d574b3f14660b5671b4019833b26f5c8eda22aa3213b68143b9cd91/merged/etc/group: no such file or directory"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.321969937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.347348298Z" level=info msg="Created container 64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff: kube-system/storage-provisioner/storage-provisioner" id=f3d1487f-51b0-448d-87dc-1544e499f6d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.349374526Z" level=info msg="Starting container: 64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff" id=0984bb45-bfdf-460a-afcf-a1d4ff9fb39d name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:57:42 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:42.354117737Z" level=info msg="Started container" PID=1636 containerID=64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff description=kube-system/storage-provisioner/storage-provisioner id=0984bb45-bfdf-460a-afcf-a1d4ff9fb39d name=/runtime.v1.RuntimeService/StartContainer sandboxID=afe6e7f77d958110e6f8f0d523c52f7347197ae6bf33c52bedf367df339b0f75
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.010414247Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.019417834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.019578839Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.019662537Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.025039873Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.02521536Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.025317051Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.035785035Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.035825625Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.035850001Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.042191554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.042231183Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.042255642Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.047165905Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 21:57:52 default-k8s-diff-port-842185 crio[647]: time="2025-10-02T21:57:52.047356637Z" level=info msg="Updated default CNI network name to kindnet"
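	The CREATE/WRITE/RENAME sequence in the CRI-O log above is kindnet writing its conflist to a .temp file and renaming it into place, so CRI-O never observes a half-written config. A minimal sketch of what a ptp-type conflist of this shape looks like (field values are illustrative, not copied from the node):
	
	  {
	    "cniVersion": "0.3.1",
	    "name": "kindnet",
	    "plugins": [
	      { "type": "ptp", "ipMasq": false,
	        "ipam": { "type": "host-local",
	                  "ranges": [[ { "subnet": "10.244.0.0/24" } ]] } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }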
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	64f0b68c2673b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           32 seconds ago       Running             storage-provisioner         2                   afe6e7f77d958       storage-provisioner                                    kube-system
	80a05ac4afd1a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           35 seconds ago       Exited              dashboard-metrics-scraper   2                   66ee016490c6a       dashboard-metrics-scraper-6ffb444bf9-zxs4q             kubernetes-dashboard
	da783ac8f46fe       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   afe9d586bf532       kubernetes-dashboard-855c9754f9-qfnfg                  kubernetes-dashboard
	a70dec87e9599       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   fc5f162800afb       busybox                                                default
	930f916f8aefd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   e8b9c2da2fba2       kindnet-qb4vm                                          kube-system
	33c80e1a2f69c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   1896666969f6c       coredns-66bc5c9577-5hq6c                               kube-system
	279f49e646f24       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   afe6e7f77d958       storage-provisioner                                    kube-system
	8463b0dcd75cd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   9dcb281719246       kube-proxy-vhggd                                       kube-system
	73e02330ac81e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   1e78517030124       kube-scheduler-default-k8s-diff-port-842185            kube-system
	bcc8f7a135946       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a1f342a8f27ad       kube-controller-manager-default-k8s-diff-port-842185   kube-system
	e26cd38eb38f1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1bdc760ce9420       kube-apiserver-default-k8s-diff-port-842185            kube-system
	e2012db846f04       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   5a72d5f08b53a       etcd-default-k8s-diff-port-842185                      kube-system
	
	
	==> coredns [33c80e1a2f69cfb800549e3ae649e668646a0d83184ad0cf1e11f9d7f5043da4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58695 - 56081 "HINFO IN 5251318617325741392.7856113883525916999. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022590159s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-842185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-842185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=default-k8s-diff-port-842185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_55_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:55:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-842185
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:58:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:55:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:58:01 +0000   Thu, 02 Oct 2025 21:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-842185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8819512bab6f4089b6ec00d17025ca74
	  System UUID:                aa48841e-0403-43a7-8420-f3cab19a557a
	  Boot ID:                    1e17b617-dd1a-4f47-9ea7-9f8af1cb03ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 coredns-66bc5c9577-5hq6c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m35s
	  kube-system                 etcd-default-k8s-diff-port-842185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m40s
	  kube-system                 kindnet-qb4vm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m35s
	  kube-system                 kube-apiserver-default-k8s-diff-port-842185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-842185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-vhggd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-default-k8s-diff-port-842185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zxs4q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qfnfg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m33s                  kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m52s (x8 over 2m52s)  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m40s                  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s                  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m40s                  kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m36s                  node-controller  Node default-k8s-diff-port-842185 event: Registered Node default-k8s-diff-port-842185 in Controller
	  Normal   NodeReady                113s                   kubelet          Node default-k8s-diff-port-842185 status is now: NodeReady
	  Normal   Starting                 76s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 76s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 76s)      kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 76s)      kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x8 over 76s)      kubelet          Node default-k8s-diff-port-842185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                    node-controller  Node default-k8s-diff-port-842185 event: Registered Node default-k8s-diff-port-842185 in Controller
	
	
	==> dmesg <==
	[ +50.963919] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:23] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:24] overlayfs: idmapped layers are currently not supported
	[ +23.275053] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +39.734560] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[  +6.128952] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[  +5.098616] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:55] overlayfs: idmapped layers are currently not supported
	[  +9.226554] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[ +27.661855] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2012db846f04fedc4a91921d9a9ee6083af8a50e9ac894b9c0ef5f131609e50] <==
	{"level":"warn","ts":"2025-10-02T21:57:08.740688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.760046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.785874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.818529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.845565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.862291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.876095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.896688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:08.918003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.000723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.057218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.091667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.109025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.132014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.148273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.163639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.181760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.227318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.249843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.264741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.304280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.311382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.336556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.351206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:57:09.406688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38934","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:58:15 up  6:40,  0 user,  load average: 5.14, 4.20, 2.73
	Linux default-k8s-diff-port-842185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [930f916f8aefd1566d89657d7223a999b3fb9270aa00ef267c7cb03f1708cb13] <==
	I1002 21:57:11.811580       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:57:11.812641       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 21:57:11.812779       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:57:11.812798       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:57:11.812810       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:57:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:57:12.011161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:57:12.011184       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:57:12.011195       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:57:12.011307       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 21:57:42.011382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 21:57:42.011475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 21:57:42.011661       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 21:57:42.014262       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 21:57:43.611377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:57:43.611478       1 metrics.go:72] Registering metrics
	I1002 21:57:43.611621       1 controller.go:711] "Syncing nftables rules"
	I1002 21:57:52.010146       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:57:52.010193       1 main.go:301] handling current node
	I1002 21:58:02.009899       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:58:02.009948       1 main.go:301] handling current node
	I1002 21:58:12.018428       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 21:58:12.018463       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e26cd38eb38f190780ac20a4962aaad22e5e6f78b418d8570ba1272bc0e6fcd1] <==
	I1002 21:57:10.567194       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:57:10.567232       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:57:10.567408       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:57:10.567455       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:57:10.577796       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:57:10.582485       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:57:10.584770       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:57:10.615120       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 21:57:10.615150       1 policy_source.go:240] refreshing policies
	I1002 21:57:10.622148       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1002 21:57:10.627621       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:57:10.654350       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:57:10.655587       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:57:10.702146       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:57:10.899454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:57:11.269207       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:57:12.188992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 21:57:12.367048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:57:12.471465       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:57:12.511158       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:57:12.747526       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.17.134"}
	I1002 21:57:12.871173       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.178.55"}
	I1002 21:57:15.047212       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:57:15.240187       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:57:15.289904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bcc8f7a1359466ea9b86d847aaee545ff42e89546c49da9e5a234add61b6e16c] <==
	I1002 21:57:14.881156       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:57:14.882585       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:57:14.882905       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:57:14.882973       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:57:14.883075       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:57:14.885772       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:57:14.889482       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:57:14.889590       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:57:14.889614       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:57:14.889619       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:57:14.889625       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:57:14.898317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:14.898413       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:57:14.898444       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:57:14.901155       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:57:14.914239       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:57:14.914294       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 21:57:14.925074       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:57:14.927507       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:57:14.930120       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:57:14.935420       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:57:14.935510       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:57:14.935552       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:57:14.935435       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:57:14.940871       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [8463b0dcd75cd6f75372c417bc075beb8b056e1c586c005ab100861750cc9798] <==
	I1002 21:57:11.922956       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:57:12.203828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:57:12.507049       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:57:12.516962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 21:57:12.517137       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:57:12.955945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:57:12.956063       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:57:12.962890       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:57:12.963243       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:57:12.963386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:12.964598       1 config.go:200] "Starting service config controller"
	I1002 21:57:12.964658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:57:12.964701       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:57:12.964728       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:57:12.964762       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:57:12.964789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:57:12.965416       1 config.go:309] "Starting node config controller"
	I1002 21:57:12.970885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:57:12.970984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:57:13.064832       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 21:57:13.064940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:57:13.064966       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [73e02330ac81eb5bb08428b429c32a892fcf1a19e76b8a5bcd1f6bca9df0bb91] <==
	I1002 21:57:07.605428       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:57:10.704391       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:57:10.704422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:57:10.749258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:57:10.749317       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:57:10.749371       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:10.749381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:10.749407       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:10.749415       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:57:10.758390       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:57:10.758467       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:57:10.855512       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:57:10.855665       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:57:10.855465       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:15.561589     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbcg\" (UniqueName: \"kubernetes.io/projected/eb119278-7163-4ac0-b60d-0a6b58e3192a-kube-api-access-jhbcg\") pod \"dashboard-metrics-scraper-6ffb444bf9-zxs4q\" (UID: \"eb119278-7163-4ac0-b60d-0a6b58e3192a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q"
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:15.561623     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb119278-7163-4ac0-b60d-0a6b58e3192a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zxs4q\" (UID: \"eb119278-7163-4ac0-b60d-0a6b58e3192a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q"
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: W1002 21:57:15.785047     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/crio-66ee016490c6ae26153eaf3367a45c8696c55c4f0080ea46feb67bfc983ff7f9 WatchSource:0}: Error finding container 66ee016490c6ae26153eaf3367a45c8696c55c4f0080ea46feb67bfc983ff7f9: Status 404 returned error can't find the container with id 66ee016490c6ae26153eaf3367a45c8696c55c4f0080ea46feb67bfc983ff7f9
	Oct 02 21:57:15 default-k8s-diff-port-842185 kubelet[774]: W1002 21:57:15.811356     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/724f09ef699244f8ee16961442cd6ba5b8a71a2727d21fb96e0387255faa191f/crio-afe9d586bf532dcf9e707ca215de7954e05ef56fbd84d8922b7c403badb523df WatchSource:0}: Error finding container afe9d586bf532dcf9e707ca215de7954e05ef56fbd84d8922b7c403badb523df: Status 404 returned error can't find the container with id afe9d586bf532dcf9e707ca215de7954e05ef56fbd84d8922b7c403badb523df
	Oct 02 21:57:22 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:22.228429     774 scope.go:117] "RemoveContainer" containerID="df1752540859e240340bf9726d6c278bc10bc51afa5f38349f54b935ce913a77"
	Oct 02 21:57:23 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:23.233526     774 scope.go:117] "RemoveContainer" containerID="df1752540859e240340bf9726d6c278bc10bc51afa5f38349f54b935ce913a77"
	Oct 02 21:57:23 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:23.234071     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:23 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:23.234266     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:24 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:24.241900     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:24 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:24.242083     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:25 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:25.753002     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:25 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:25.753207     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:39 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:39.982618     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:40.288355     774 scope.go:117] "RemoveContainer" containerID="c1c58181c4c3504ea749bb8aea83f12705ebb3efcb2a368c634e8fd94b1ae91c"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:40.288822     774 scope.go:117] "RemoveContainer" containerID="80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:40.289090     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:40 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:40.320223     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qfnfg" podStartSLOduration=13.5712871 podStartE2EDuration="25.320206918s" podCreationTimestamp="2025-10-02 21:57:15 +0000 UTC" firstStartedPulling="2025-10-02 21:57:15.816310463 +0000 UTC m=+16.318179428" lastFinishedPulling="2025-10-02 21:57:27.56523028 +0000 UTC m=+28.067099246" observedRunningTime="2025-10-02 21:57:28.287369701 +0000 UTC m=+28.789238667" watchObservedRunningTime="2025-10-02 21:57:40.320206918 +0000 UTC m=+40.822075892"
	Oct 02 21:57:42 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:42.297639     774 scope.go:117] "RemoveContainer" containerID="279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756"
	Oct 02 21:57:45 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:45.753255     774 scope.go:117] "RemoveContainer" containerID="80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	Oct 02 21:57:45 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:45.753473     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:57:56 default-k8s-diff-port-842185 kubelet[774]: I1002 21:57:56.983403     774 scope.go:117] "RemoveContainer" containerID="80a05ac4afd1a94da00decaa801cea03d7347aafe0ac5a6914f53e984de953b3"
	Oct 02 21:57:56 default-k8s-diff-port-842185 kubelet[774]: E1002 21:57:56.984007     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zxs4q_kubernetes-dashboard(eb119278-7163-4ac0-b60d-0a6b58e3192a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zxs4q" podUID="eb119278-7163-4ac0-b60d-0a6b58e3192a"
	Oct 02 21:58:08 default-k8s-diff-port-842185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 21:58:08 default-k8s-diff-port-842185 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 21:58:08 default-k8s-diff-port-842185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [da783ac8f46fe44b10bd4efc52b5f34498ded24088c7777ecc3a07c3ce7bf0ea] <==
	2025/10/02 21:57:27 Using namespace: kubernetes-dashboard
	2025/10/02 21:57:27 Using in-cluster config to connect to apiserver
	2025/10/02 21:57:27 Using secret token for csrf signing
	2025/10/02 21:57:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 21:57:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 21:57:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 21:57:27 Generating JWE encryption key
	2025/10/02 21:57:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 21:57:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 21:57:29 Initializing JWE encryption key from synchronized object
	2025/10/02 21:57:29 Creating in-cluster Sidecar client
	2025/10/02 21:57:29 Serving insecurely on HTTP port: 9090
	2025/10/02 21:57:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:57:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 21:57:27 Starting overwatch
	
	
	==> storage-provisioner [279f49e646f24ae456db23c8dd2006c88affb19c3f9b2b81e77d75a954d34756] <==
	I1002 21:57:11.854529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 21:57:41.855840       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [64f0b68c2673bf46005af429475fc83db461da30e3d43a9a7126fb7ac6660eff] <==
	W1002 21:57:56.762384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:59.791164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:59.832478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:57:59.832653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:57:59.839051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842185_be71bb3f-1562-4370-9904-64565b33ad69!
	I1002 21:57:59.839204       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8b17831-0c1d-4950-9708-ff3cf4191d2a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-842185_be71bb3f-1562-4370-9904-64565b33ad69 became leader
	W1002 21:57:59.847742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:57:59.859818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:57:59.939208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842185_be71bb3f-1562-4370-9904-64565b33ad69!
	W1002 21:58:01.864382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:01.870276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:03.873894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:03.880730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:05.883607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:05.901361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:07.907342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:07.917928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:09.924394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:09.930197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:11.934407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:11.946577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:13.954276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:13.964624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:15.974366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:58:15.985529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
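
The first storage-provisioner container in the dump above exited with "error getting server version: Get \"https://10.96.0.1:443/version?timeout=32s\": dial tcp 10.96.0.1:443: i/o timeout". A minimal Go sketch of that probe, assuming it is run from a pod on the cluster network (10.96.0.1 is the in-cluster service VIP and is not routable from outside); this is illustrative only, not the provisioner's code:

	// apiserver_probe.go - reproduces the /version request the provisioner
	// failed on. InsecureSkipVerify keeps the sketch self-contained; a real
	// client would verify the cluster CA instead.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // same timeout as in the logged URL
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("error getting server version:", err) // e.g. "dial tcp 10.96.0.1:443: i/o timeout"
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}
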
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185: exit status 2 (555.071998ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.76s)
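
The post-mortem above ends by asking kubectl for pods whose phase is not Running, using a jsonpath output and a field selector. A standalone Go sketch of the same check, shelling out the way the helpers do; the context name is taken from the log above, and the sketch assumes kubectl and the kubeconfig entry still exist:

	// postmortem_pods.go - mirrors the helpers_test.go step:
	// kubectl get po -A -o=jsonpath=... --field-selector=status.phase!=Running
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command(
			"kubectl", "--context", "default-k8s-diff-port-842185",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl failed: %v\n%s", err, out)
		}
		fmt.Printf("non-Running pods: %q\n", out)
	}
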
E1002 22:04:08.721940  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:10.720352  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:10.726801  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:10.738325  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:10.759769  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:10.801131  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:10.882521  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:11.044687  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:11.366191  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:12.008575  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:13.290190  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:15.851944  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:20.974005  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:25.746925  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:31.215910  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/auto-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:38.873889  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:38.880281  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:38.891772  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:38.913165  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:38.954545  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:39.036098  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:39.197652  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:39.519883  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:40.161979  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
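
The cert_rotation errors above are client-go's certificate-rotation loop repeatedly trying to open client certificates for profiles (default-k8s-diff-port-842185, auto-644857, functional-850296, kindnet-644857) whose files were removed when those profiles were deleted; the errors are noisy but expected after cleanup. A small probe that surfaces the same missing-file condition; the paths are copied from the log, nothing else is assumed:

	// cert_probe.go - stats the client.crt files the rotation loop polls.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		profiles := "/home/jenkins/minikube-integration/21683-992084/.minikube/profiles"
		for _, p := range []string{"default-k8s-diff-port-842185", "auto-644857", "kindnet-644857"} {
			crt := profiles + "/" + p + "/client.crt"
			if _, err := os.Stat(crt); err != nil {
				fmt.Println(err) // e.g. "stat .../client.crt: no such file or directory"
			}
		}
	}
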

                                                
                                    

Test pass (252/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 19.61
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 16.63
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.69
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 171.01
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.85
48 TestAddons/StoppedEnableDisable 12.38
49 TestCertOptions 40.57
50 TestCertExpiration 234.35
59 TestErrorSpam/setup 33.43
60 TestErrorSpam/start 0.77
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 6.42
63 TestErrorSpam/unpause 6.01
64 TestErrorSpam/stop 1.44
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 78.79
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 30.09
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.41
76 TestFunctional/serial/CacheCmd/cache/add_local 1.09
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.22
84 TestFunctional/serial/ExtraConfig 34.41
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.51
87 TestFunctional/serial/LogsFileCmd 1.45
88 TestFunctional/serial/InvalidService 4.2
90 TestFunctional/parallel/ConfigCmd 0.45
92 TestFunctional/parallel/DryRun 0.45
93 TestFunctional/parallel/InternationalLanguage 0.2
94 TestFunctional/parallel/StatusCmd 0.99
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.72
103 TestFunctional/parallel/CpCmd 2.34
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.68
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
114 TestFunctional/parallel/License 0.26
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 24
130 TestFunctional/parallel/MountCmd/specific-port 1.83
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
132 TestFunctional/parallel/ServiceCmd/List 1.32
133 TestFunctional/parallel/ServiceCmd/JSONOutput 1.31
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 1.07
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.77
144 TestFunctional/parallel/ImageCommands/Setup 0.6
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 216.71
163 TestMultiControlPlane/serial/DeployApp 6.58
164 TestMultiControlPlane/serial/PingHostFromPods 1.55
165 TestMultiControlPlane/serial/AddWorkerNode 61.64
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
168 TestMultiControlPlane/serial/CopyFile 19.12
169 TestMultiControlPlane/serial/StopSecondaryNode 12.66
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 20.84
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.95
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.56
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
176 TestMultiControlPlane/serial/StopCluster 35.59
177 TestMultiControlPlane/serial/RestartCluster 75.28
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 93.8
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.14
184 TestJSONOutput/start/Command 77.43
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.72
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 41.12
210 TestKicCustomNetwork/use_default_bridge_network 37.31
211 TestKicExistingNetwork 36.08
212 TestKicCustomSubnet 35.24
213 TestKicStaticIP 38.19
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 72.42
218 TestMountStart/serial/StartWithMountFirst 7.76
219 TestMountStart/serial/VerifyMountFirst 0.25
220 TestMountStart/serial/StartWithMountSecond 9.15
221 TestMountStart/serial/VerifyMountSecond 0.29
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.28
224 TestMountStart/serial/Stop 1.22
225 TestMountStart/serial/RestartStopped 8.37
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 134.59
230 TestMultiNode/serial/DeployApp2Nodes 5.08
231 TestMultiNode/serial/PingHostFrom2Pods 0.93
232 TestMultiNode/serial/AddNode 58.83
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.69
235 TestMultiNode/serial/CopyFile 10.34
236 TestMultiNode/serial/StopNode 2.27
237 TestMultiNode/serial/StartAfterStop 7.77
238 TestMultiNode/serial/RestartKeepsNodes 76.83
239 TestMultiNode/serial/DeleteNode 5.37
240 TestMultiNode/serial/StopMultiNode 23.94
241 TestMultiNode/serial/RestartMultiNode 50.35
242 TestMultiNode/serial/ValidateNameConflict 40.57
249 TestScheduledStopUnix 110.64
252 TestInsufficientStorage 14.3
253 TestRunningBinaryUpgrade 56.72
255 TestKubernetesUpgrade 356.42
256 TestMissingContainerUpgrade 115.82
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 47.57
260 TestNoKubernetes/serial/StartWithStopK8s 9.42
261 TestNoKubernetes/serial/Start 9.37
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 0.66
264 TestNoKubernetes/serial/Stop 1.21
265 TestNoKubernetes/serial/StartNoArgs 6.9
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
267 TestStoppedBinaryUpgrade/Setup 1.11
268 TestStoppedBinaryUpgrade/Upgrade 53.94
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
278 TestPause/serial/Start 81.87
279 TestPause/serial/SecondStartNoReconfiguration 18.54
288 TestNetworkPlugins/group/false 3.6
293 TestStartStop/group/old-k8s-version/serial/FirstStart 64.83
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.57
296 TestStartStop/group/old-k8s-version/serial/Stop 11.96
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
298 TestStartStop/group/old-k8s-version/serial/SecondStart 56.34
300 TestStartStop/group/no-preload/serial/FirstStart 70.6
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
305 TestStartStop/group/no-preload/serial/DeployApp 8.4
307 TestStartStop/group/embed-certs/serial/FirstStart 86.13
309 TestStartStop/group/no-preload/serial/Stop 12.08
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
311 TestStartStop/group/no-preload/serial/SecondStart 60.19
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
314 TestStartStop/group/embed-certs/serial/DeployApp 9.38
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
318 TestStartStop/group/embed-certs/serial/Stop 12.11
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.83
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
322 TestStartStop/group/embed-certs/serial/SecondStart 63.67
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
329 TestStartStop/group/newest-cni/serial/FirstStart 43.83
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.29
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.05
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/Stop 1.41
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
338 TestStartStop/group/newest-cni/serial/SecondStart 16.64
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
343 TestNetworkPlugins/group/auto/Start 85.44
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.15
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
348 TestNetworkPlugins/group/kindnet/Start 78.86
349 TestNetworkPlugins/group/auto/KubeletFlags 0.31
350 TestNetworkPlugins/group/auto/NetCatPod 9.33
351 TestNetworkPlugins/group/auto/DNS 0.16
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.14
354 TestNetworkPlugins/group/kindnet/ControllerPod 6
355 TestNetworkPlugins/group/calico/Start 67.78
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
358 TestNetworkPlugins/group/kindnet/DNS 0.18
359 TestNetworkPlugins/group/kindnet/Localhost 0.19
360 TestNetworkPlugins/group/kindnet/HairPin 0.17
361 TestNetworkPlugins/group/custom-flannel/Start 64.26
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.33
364 TestNetworkPlugins/group/calico/NetCatPod 12.37
365 TestNetworkPlugins/group/calico/DNS 0.17
366 TestNetworkPlugins/group/calico/Localhost 0.17
367 TestNetworkPlugins/group/calico/HairPin 0.24
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.44
370 TestNetworkPlugins/group/enable-default-cni/Start 81.05
371 TestNetworkPlugins/group/custom-flannel/DNS 0.26
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
374 TestNetworkPlugins/group/flannel/Start 56.69
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
380 TestNetworkPlugins/group/flannel/ControllerPod 6
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
382 TestNetworkPlugins/group/flannel/NetCatPod 11.35
383 TestNetworkPlugins/group/flannel/DNS 0.19
384 TestNetworkPlugins/group/flannel/Localhost 0.19
385 TestNetworkPlugins/group/flannel/HairPin 0.17
386 TestNetworkPlugins/group/bridge/Start 77.06
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
388 TestNetworkPlugins/group/bridge/NetCatPod 10.26
389 TestNetworkPlugins/group/bridge/DNS 0.15
390 TestNetworkPlugins/group/bridge/Localhost 0.13
391 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (19.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-926391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-926391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.611734408s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (19.61s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 20:18:23.556498  993954 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 20:18:23.556578  993954 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
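
The preload-exists check above only verifies that the preload tarball is present in the local cache. A minimal sketch of an equivalent check, assuming the cache path printed by preload.go:198; the real test uses minikube's internal preload package rather than a raw stat:

	// preload_exists.go - checks for the cached preload tarball.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const tarball = "/home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
		fi, err := os.Stat(tarball)
		if err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Printf("found local preload: %s (%d bytes)\n", tarball, fi.Size())
	}
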

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-926391
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-926391: exit status 85 (98.08348ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-926391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-926391 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:03.987557  993959 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:03.987677  993959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:03.987687  993959 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:03.987693  993959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:03.987974  993959 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	W1002 20:18:03.988127  993959 root.go:315] Error reading config file at /home/jenkins/minikube-integration/21683-992084/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-992084/.minikube/config/config.json: no such file or directory
	I1002 20:18:03.988521  993959 out.go:368] Setting JSON to true
	I1002 20:18:03.989376  993959 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18021,"bootTime":1759418263,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:18:03.989445  993959 start.go:140] virtualization:  
	I1002 20:18:03.993467  993959 out.go:99] [download-only-926391] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 20:18:03.993675  993959 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 20:18:03.993795  993959 notify.go:221] Checking for updates...
	I1002 20:18:03.996736  993959 out.go:171] MINIKUBE_LOCATION=21683
	I1002 20:18:03.999656  993959 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:04.004187  993959 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:18:04.007615  993959 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:18:04.010542  993959 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 20:18:04.016455  993959 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:18:04.016825  993959 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:04.044646  993959 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:18:04.044767  993959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:04.107544  993959 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 20:18:04.097356746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:04.107646  993959 docker.go:319] overlay module found
	I1002 20:18:04.110747  993959 out.go:99] Using the docker driver based on user configuration
	I1002 20:18:04.110804  993959 start.go:306] selected driver: docker
	I1002 20:18:04.110828  993959 start.go:936] validating driver "docker" against <nil>
	I1002 20:18:04.110927  993959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:04.163481  993959 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 20:18:04.15458504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:04.163644  993959 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:04.163929  993959 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:18:04.164091  993959 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:18:04.167210  993959 out.go:171] Using Docker driver with root privileges
	I1002 20:18:04.170238  993959 cni.go:84] Creating CNI manager for ""
	I1002 20:18:04.170315  993959 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:04.170333  993959 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:04.170416  993959 start.go:350] cluster config:
	{Name:download-only-926391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-926391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:04.173366  993959 out.go:99] Starting "download-only-926391" primary control-plane node in "download-only-926391" cluster
	I1002 20:18:04.173388  993959 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:04.176250  993959 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:04.176290  993959 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 20:18:04.176481  993959 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:04.193659  993959 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:04.193862  993959 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:18:04.193972  993959 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:04.239676  993959 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:04.239703  993959 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:04.239844  993959 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 20:18:04.243257  993959 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 20:18:04.243293  993959 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 20:18:04.336268  993959 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1002 20:18:04.336422  993959 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:09.369628  993959 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-926391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-926391"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
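
The "Downloading Kubernetes v1.28.0 preload" step inside the log above first fetches an MD5 from the GCS API (preload.go:290) and then downloads the tarball from a URL carrying a ?checksum=md5:... suffix. The sketch below reproduces the observable fetch-and-verify behavior with only the Go standard library, reusing the URL and checksum copied from the log; minikube's actual download package, with its retries, mirrors, and progress output, is not modeled here.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and fails if the payload's MD5
// does not match want. This is a standard-library sketch of the checksum
// step only, not minikube's download implementation.
func downloadWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash while writing so the file is only trusted after verification.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and checksum copied from the preload download logged above.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		"preload.tar.lz4",
		"e092595ade89dbfc477bd4cd6b9c633b",
	)
	fmt.Println("download result:", err)
}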

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-926391
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (16.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-569491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-569491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.6255931s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (16.63s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 20:18:40.625506  993954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 20:18:40.625541  993954 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-569491
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-569491: exit status 85 (86.284383ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-926391 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-926391 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-926391                                                                                                                                                   │ download-only-926391 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ -o=json --download-only -p download-only-569491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-569491 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:24.048121  994157 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:24.048299  994157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:24.048309  994157 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:24.048315  994157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:24.048557  994157 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:18:24.048995  994157 out.go:368] Setting JSON to true
	I1002 20:18:24.049803  994157 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18041,"bootTime":1759418263,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:18:24.049873  994157 start.go:140] virtualization:  
	I1002 20:18:24.053081  994157 out.go:99] [download-only-569491] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:18:24.053303  994157 notify.go:221] Checking for updates...
	I1002 20:18:24.056261  994157 out.go:171] MINIKUBE_LOCATION=21683
	I1002 20:18:24.059432  994157 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:24.062438  994157 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:18:24.065339  994157 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:18:24.068249  994157 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 20:18:24.073924  994157 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:18:24.074237  994157 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:24.101044  994157 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:18:24.101149  994157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:24.157124  994157 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 20:18:24.147804746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:24.157230  994157 docker.go:319] overlay module found
	I1002 20:18:24.160204  994157 out.go:99] Using the docker driver based on user configuration
	I1002 20:18:24.160256  994157 start.go:306] selected driver: docker
	I1002 20:18:24.160266  994157 start.go:936] validating driver "docker" against <nil>
	I1002 20:18:24.160369  994157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:24.212759  994157 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 20:18:24.203436963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:18:24.212919  994157 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:24.213184  994157 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 20:18:24.213345  994157 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:18:24.216498  994157 out.go:171] Using Docker driver with root privileges
	I1002 20:18:24.219419  994157 cni.go:84] Creating CNI manager for ""
	I1002 20:18:24.219489  994157 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:24.219504  994157 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:24.219597  994157 start.go:350] cluster config:
	{Name:download-only-569491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-569491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:24.222738  994157 out.go:99] Starting "download-only-569491" primary control-plane node in "download-only-569491" cluster
	I1002 20:18:24.222759  994157 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:24.225686  994157 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:24.225727  994157 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:24.225903  994157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:24.242115  994157 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:18:24.242242  994157 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:18:24.242265  994157 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:18:24.242275  994157 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:18:24.242282  994157 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:18:24.278082  994157 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 20:18:24.278106  994157 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:24.278913  994157 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:24.281972  994157 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 20:18:24.282007  994157 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 20:18:24.373155  994157 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1002 20:18:24.373211  994157 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21683-992084/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-569491 host does not exist
	  To start a cluster, run: "minikube start -p download-only-569491"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-569491
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.69s)

                                                
                                                
=== RUN   TestBinaryMirror
I1002 20:18:41.776680  993954 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-261948 --alsologtostderr --binary-mirror http://127.0.0.1:38235 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-261948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-261948
--- PASS: TestBinaryMirror (0.69s)
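
TestBinaryMirror exercises the --binary-mirror flag; binary.go:74 above shows the resulting URL shape, where a ?checksum=file:... suffix points at a sibling .sha256 file next to the binary. The helper below is a hypothetical reconstruction of that shape from the logged line, not minikube's API; swapping the mirror argument is effectively what --binary-mirror does.

package main

import "fmt"

// kubectlURL rebuilds the URL seen in the log: the binary location plus a
// ?checksum=file:<sha256 URL> suffix naming the adjacent checksum file.
// mirror is the host prefix; the upstream default is https://dl.k8s.io.
func kubectlURL(mirror, version, osName, arch string) string {
	base := fmt.Sprintf("%s/release/%s/bin/%s/%s/kubectl", mirror, version, osName, arch)
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	// Reproduces the URL from binary.go:74 above.
	fmt.Println(kubectlURL("https://dl.k8s.io", "v1.34.1", "linux", "arm64"))
}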

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-693704
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-693704: exit status 85 (77.72832ms)

                                                
                                                
-- stdout --
	* Profile "addons-693704" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-693704"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-693704
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-693704: exit status 85 (65.010346ms)

                                                
                                                
-- stdout --
	* Profile "addons-693704" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-693704"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (171.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-693704 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-693704 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m51.010587427s)
--- PASS: TestAddons/Setup (171.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-693704 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-693704 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-693704 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-693704 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f11fc75b-21bb-4771-a94c-c3c828031e4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f11fc75b-21bb-4771-a94c-c3c828031e4e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00385685s
addons_test.go:694: (dbg) Run:  kubectl --context addons-693704 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-693704 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-693704 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-693704 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-693704
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-693704: (12.10094102s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-693704
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-693704
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-693704
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

                                                
                                    
TestCertOptions (40.57s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-769461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.845986026s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-769461 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-769461 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-769461 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-769461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-769461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-769461: (1.970717843s)
--- PASS: TestCertOptions (40.57s)

                                                
                                    
TestCertExpiration (234.35s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-955864 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.370513586s)
E1002 21:49:08.818781  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:49:25.746575  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-955864 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.408576122s)
helpers_test.go:175: Cleaning up "cert-expiration-955864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-955864
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-955864: (2.572039838s)
--- PASS: TestCertExpiration (234.35s)
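
TestCertExpiration starts a cluster whose certificates expire after 3m, lets that window lapse, then restarts with --cert-expiration=8760h (one year) and confirms the certificates are reissued. One way to check the effect by hand is to parse NotAfter from the apiserver certificate; the sketch below does that for a local PEM file. The path is illustrative only: with the docker driver the certificate lives inside the node, e.g. retrievable via minikube ssh from /var/lib/minikube/certs/apiserver.crt.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// printExpiry decodes a PEM certificate and reports how long it remains
// valid, which is the property --cert-expiration controls.
func printExpiry(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	fmt.Printf("expires %s (in %s)\n", cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
	return nil
}

func main() {
	// Illustrative path; copy the cert out of the node first.
	if err := printExpiry("apiserver.crt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}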

                                                
                                    
TestErrorSpam/setup (33.43s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-254589 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-254589 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-254589 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-254589 --driver=docker  --container-runtime=crio: (33.434806155s)
--- PASS: TestErrorSpam/setup (33.43s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (6.42s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause: exit status 80 (2.031398553s)

                                                
                                                
-- stdout --
	* Pausing node nospam-254589 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause: exit status 80 (2.024882504s)

                                                
                                                
-- stdout --
	* Pausing node nospam-254589 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:36:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause: exit status 80 (2.363357507s)

                                                
                                                
-- stdout --
	* Pausing node nospam-254589 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:36:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.42s)
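
All three pause attempts above fail identically: before pausing, minikube enumerates containers inside the node with `sudo runc list -f json`, and on this CRI-O node runc's default state directory /run/runc does not exist, so the probe exits 1 and the pause aborts with GUEST_PAUSE. The sketch below runs the same probe as a reproduction aid; it is not minikube's implementation. runc's global --root flag can point the listing at a different state directory, which hints at why a probe hard-wired to the default misfires under CRI-O.

package main

import (
	"fmt"
	"os/exec"
)

// listRuncContainers runs the same command that fails in the log above.
// CombinedOutput captures the "open /run/runc: no such file or directory"
// message on stderr along with the non-zero exit status.
func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := listRuncContainers()
	fmt.Printf("output: %s\nerr: %v\n", out, err)
}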

                                                
                                    
TestErrorSpam/unpause (6.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause
E1002 20:36:34.540061  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.546426  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.557893  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.579257  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.620583  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.701946  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.863530  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:35.185197  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause: exit status 80 (2.29873375s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-254589 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:36:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause" failed: exit status 80
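The unpause failure is at the runc level rather than the Kubernetes level: `sudo runc list -f json` cannot run because /run/runc is missing in the node. A diagnostic sketch, assuming the nospam-254589 node is still up (CRI-O keeps runc state under /run/runc, so an absent directory means nothing is paused there):

  $ out/minikube-linux-arm64 -p nospam-254589 ssh -- sudo ls /run/runc
  $ out/minikube-linux-arm64 -p nospam-254589 ssh -- sudo runc list -f json
  # both fail with "no such file or directory" in this state, and
  # `minikube unpause` maps that to GUEST_UNPAUSE / exit status 80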
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause
E1002 20:36:35.827478  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:37.109199  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause: exit status 80 (1.85238343s)
-- stdout --
	* Unpausing node nospam-254589 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:36:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause: exit status 80 (1.854807966s)
-- stdout --
	* Unpausing node nospam-254589 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T20:36:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_3.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.01s)

                                                
                                    
TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 stop
E1002 20:36:39.671334  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 stop: (1.232865594s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-254589 --log_dir /tmp/nospam-254589 stop
--- PASS: TestErrorSpam/stop (1.44s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-992084/.minikube/files/etc/test/nested/copy/993954/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-850296 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1002 20:36:55.034957  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:37:15.516285  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:37:56.477696  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-850296 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.794717741s)
--- PASS: TestFunctional/serial/StartWithProxy (78.79s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.09s)

=== RUN   TestFunctional/serial/SoftStart
I1002 20:38:04.811992  993954 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-850296 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-850296 --alsologtostderr -v=8: (30.093739818s)
functional_test.go:678: soft start took 30.094254595s for "functional-850296" cluster.
I1002 20:38:34.906016  993954 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.09s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-850296 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 cache add registry.k8s.io/pause:3.1: (1.183805344s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 cache add registry.k8s.io/pause:3.3: (1.130664374s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 cache add registry.k8s.io/pause:latest: (1.090848172s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-850296 /tmp/TestFunctionalserialCacheCmdcacheadd_local1999053199/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cache add minikube-local-cache-test:functional-850296
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cache delete minikube-local-cache-test:functional-850296
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-850296
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.456018ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
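For reference, the reload check above reduces to this sequence (the same commands the test runs, shown as a sketch):

  $ out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # fails: the image is gone from the node
  $ out/minikube-linux-arm64 -p functional-850296 cache reload
  $ out/minikube-linux-arm64 -p functional-850296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # succeeds: the image is pushed back into the node from the host-side cache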

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 kubectl -- --context functional-850296 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.22s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-850296 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.22s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-850296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-850296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.405792732s)
functional_test.go:776: restart took 34.405886235s for "functional-850296" cluster.
I1002 20:39:16.701063  993954 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (34.41s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-850296 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
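The phase/status lines above come from parsing the pods' JSON; a roughly equivalent one-liner (a sketch, not the test's exact code):

  $ kubectl --context functional-850296 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}: {.status.phase}{"\n"}{end}'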

                                                
                                    
TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 logs: (1.506987677s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 logs --file /tmp/TestFunctionalserialLogsFileCmd3566668330/001/logs.txt
E1002 20:39:18.399711  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 logs --file /tmp/TestFunctionalserialLogsFileCmd3566668330/001/logs.txt: (1.445867907s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-850296 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-850296
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-850296: exit status 115 (375.88495ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32498 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-850296 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)
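SVC_UNREACHABLE (exit 115) is expected here: the NodePort is allocated, but no running pod backs the service. A quick way to confirm that state (sketch):

  $ kubectl --context functional-850296 get endpoints invalid-svc
  # an empty ENDPOINTS column means the selector matches no ready pod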

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 config get cpus: exit status 14 (88.755107ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 config get cpus: exit status 14 (71.910265ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
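Exit status 14 marks a key that is absent from the config; the full round trip the test exercises, as a sketch:

  $ out/minikube-linux-arm64 -p functional-850296 config set cpus 2
  $ out/minikube-linux-arm64 -p functional-850296 config get cpus     # prints 2
  $ out/minikube-linux-arm64 -p functional-850296 config unset cpus
  $ out/minikube-linux-arm64 -p functional-850296 config get cpus     # exits 14: key not found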

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-850296 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-850296 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (198.34706ms)
-- stdout --
	* [functional-850296] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1002 20:54:09.002245 1024214 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:09.002443 1024214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:09.002449 1024214 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:09.002454 1024214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:09.002794 1024214 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:54:09.003250 1024214 out.go:368] Setting JSON to false
	I1002 20:54:09.004643 1024214 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20186,"bootTime":1759418263,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:54:09.004744 1024214 start.go:140] virtualization:  
	I1002 20:54:09.008491 1024214 out.go:179] * [functional-850296] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 20:54:09.011753 1024214 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:54:09.011861 1024214 notify.go:221] Checking for updates...
	I1002 20:54:09.017916 1024214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:09.021077 1024214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:54:09.023993 1024214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:54:09.026879 1024214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:54:09.029704 1024214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:09.033026 1024214 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:09.033603 1024214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:54:09.068413 1024214 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:54:09.068590 1024214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:09.128257 1024214 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:54:09.118546671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:54:09.128372 1024214 docker.go:319] overlay module found
	I1002 20:54:09.131465 1024214 out.go:179] * Using the docker driver based on existing profile
	I1002 20:54:09.134390 1024214 start.go:306] selected driver: docker
	I1002 20:54:09.134413 1024214 start.go:936] validating driver "docker" against &{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:09.134514 1024214 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:09.137943 1024214 out.go:203] 
	W1002 20:54:09.140886 1024214 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 20:54:09.143653 1024214 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-850296 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
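Note that --dry-run still performs driver and resource validation, which is why the 250MB request exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY while the second invocation (no --memory override) passes. A sketch of the boundary, using the 1800MB minimum reported above:

  $ out/minikube-linux-arm64 start -p functional-850296 --dry-run --memory 1800mb \
      --driver=docker --container-runtime=crio
  # at the reported minimum, validation passes; anything below it fails fast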

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-850296 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-850296 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.285105ms)
-- stdout --
	* [functional-850296] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1002 20:54:08.804336 1024167 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:08.804456 1024167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:08.804471 1024167 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:08.804476 1024167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:08.804836 1024167 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 20:54:08.805219 1024167 out.go:368] Setting JSON to false
	I1002 20:54:08.806098 1024167 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20186,"bootTime":1759418263,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 20:54:08.806167 1024167 start.go:140] virtualization:  
	I1002 20:54:08.810141 1024167 out.go:179] * [functional-850296] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 20:54:08.813182 1024167 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:54:08.813249 1024167 notify.go:221] Checking for updates...
	I1002 20:54:08.819797 1024167 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:08.822616 1024167 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 20:54:08.825484 1024167 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 20:54:08.828367 1024167 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 20:54:08.831377 1024167 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:08.836256 1024167 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:08.838648 1024167 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:54:08.867584 1024167 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 20:54:08.867708 1024167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:08.929450 1024167 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 20:54:08.919697421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 20:54:08.929565 1024167 docker.go:319] overlay module found
	I1002 20:54:08.932790 1024167 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 20:54:08.935685 1024167 start.go:306] selected driver: docker
	I1002 20:54:08.935709 1024167 start.go:936] validating driver "docker" against &{Name:functional-850296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-850296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:08.935819 1024167 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:08.939334 1024167 out.go:203] 
	W1002 20:54:08.942172 1024167 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 20:54:08.944991 1024167 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
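The -f argument above is a Go template rendered against minikube's status struct; "kublet" is only a literal label in the template string, while the field reference {{.Kubelet}} is spelled correctly. The same query with the label fixed (sketch):

  $ out/minikube-linux-arm64 -p functional-850296 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'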

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh -n functional-850296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cp functional-850296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd631086801/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh -n functional-850296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh -n functional-850296 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.34s)

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/993954/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /etc/test/nested/copy/993954/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
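The file verified here was synced from the host: anything under $MINIKUBE_HOME/files is copied into the node at the same relative path on start (the local source is the sync path logged by TestFunctional/serial/CopySyncFile above). Sketch:

  $ mkdir -p ~/.minikube/files/etc/test/nested/copy/993954
  $ echo 'Test file for checking file sync process' \
      > ~/.minikube/files/etc/test/nested/copy/993954/hosts
  # after the next `minikube start`, the file appears at /etc/test/nested/copy/993954/hosts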

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/993954.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /etc/ssl/certs/993954.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/993954.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /usr/share/ca-certificates/993954.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/9939542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /etc/ssl/certs/9939542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/9939542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /usr/share/ca-certificates/9939542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
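Each certificate is checked twice: under its PEM name and under its OpenSSL subject-hash alias (the .0 entries). The pairing can be verified by hand inside the node, assuming openssl is available in the node image (sketch):

  $ out/minikube-linux-arm64 -p functional-850296 ssh \
      "openssl x509 -hash -noout -in /etc/ssl/certs/993954.pem"
  # should print 51391683, matching the /etc/ssl/certs/51391683.0 link checked above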

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-850296 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh "sudo systemctl is-active docker": exit status 1 (280.197769ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh "sudo systemctl is-active containerd": exit status 1 (278.438921ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
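`systemctl is-active` exits 3 for an inactive unit (the "Process exited with status 3" in stderr above), which minikube ssh surfaces as a non-zero exit; that is exactly what the test expects when crio is the only active runtime. The positive counterpart, assuming the unit is named crio (sketch):

  $ out/minikube-linux-arm64 -p functional-850296 ssh "sudo systemctl is-active crio"
  # prints "active" and exits 0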

                                                
                                    
TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-850296 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-850296 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-850296 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1019894: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-850296 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-850296 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-850296 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "366.798138ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.178145ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.207086ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "61.421267ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
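The two "Took" lines above measure the same listing with and without --light. A small Go sketch of timing both invocations outside the harness (minikube on PATH is assumed; the timed helper is illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timed runs a minikube invocation and reports how long it took, the same
// measurement the test prints as `Took "..."`.
func timed(args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command("minikube", args...).Run() // output discarded; only duration matters
	return time.Since(start), err
}

func main() {
	for _, args := range [][]string{
		{"profile", "list", "-o", "json"},
		// --light skips validating cluster status, which is why it returns
		// in ~60ms above versus ~360ms for the full listing.
		{"profile", "list", "-o", "json", "--light"},
	} {
		d, err := timed(args...)
		fmt.Println(args, d, err)
	}
}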

TestFunctional/parallel/MountCmd/any-port (24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdany-port2051604795/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759438420674874808" to /tmp/TestFunctionalparallelMountCmdany-port2051604795/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759438420674874808" to /tmp/TestFunctionalparallelMountCmdany-port2051604795/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759438420674874808" to /tmp/TestFunctionalparallelMountCmdany-port2051604795/001/test-1759438420674874808
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.476719ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 20:53:41.049669  993954 retry.go:31] will retry after 587.489111ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 20:53 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 20:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 20:53 test-1759438420674874808
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh cat /mount-9p/test-1759438420674874808
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-850296 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [fae130f1-436b-4f60-aa57-6c8548fa2898] Pending
helpers_test.go:352: "busybox-mount" [fae130f1-436b-4f60-aa57-6c8548fa2898] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [fae130f1-436b-4f60-aa57-6c8548fa2898] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [fae130f1-436b-4f60-aa57-6c8548fa2898] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.003507932s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-850296 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdany-port2051604795/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.00s)
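Worth noting above: the first findmnt probe runs before the 9p mount is ready, and retry.go waits ~587ms before the second probe succeeds. A standalone Go sketch of that poll-until-mounted loop, assuming minikube on PATH and the profile named in the log (waitForMount is our name, not the suite's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh "findmnt -T <dir> | grep 9p"` until the
// 9p mount shows up or the deadline passes, mirroring the retry in the log.
func waitForMount(profile, dir string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mounted: %s", out)
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("%s not mounted after %s", dir, deadline)
		}
		time.Sleep(500 * time.Millisecond) // comparable to the ~587ms backoff above
	}
}

func main() {
	if err := waitForMount("functional-850296", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}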

TestFunctional/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdspecific-port1159113948/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.945312ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 20:54:05.026166  993954 retry.go:31] will retry after 444.239236ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdspecific-port1159113948/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh "sudo umount -f /mount-9p": exit status 1 (271.27653ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-850296 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdspecific-port1159113948/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2333129177/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2333129177/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2333129177/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-850296 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2333129177/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2333129177/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-850296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2333129177/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

TestFunctional/parallel/ServiceCmd/List (1.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 service list: (1.317924356s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 service list -o json: (1.313495128s)
functional_test.go:1504: Took "1.313584906s" to run "out/minikube-linux-arm64 -p functional-850296 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.31s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 version -o=json --components: (1.072958007s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-850296 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-850296 image ls --format short --alsologtostderr:
I1002 20:55:16.959035 1026703 out.go:360] Setting OutFile to fd 1 ...
I1002 20:55:16.959238 1026703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:16.959266 1026703 out.go:374] Setting ErrFile to fd 2...
I1002 20:55:16.959287 1026703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:16.959582 1026703 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
I1002 20:55:16.960286 1026703 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:16.960459 1026703 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:16.960964 1026703 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:55:16.980818 1026703 ssh_runner.go:195] Run: systemctl --version
I1002 20:55:16.980868 1026703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:55:17.005143 1026703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:55:17.100640 1026703 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-850296 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/my-image                      │ functional-850296  │ 135a2c9ec859b │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-850296 image ls --format table --alsologtostderr:
I1002 20:55:21.601922 1027208 out.go:360] Setting OutFile to fd 1 ...
I1002 20:55:21.602192 1027208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:21.602218 1027208 out.go:374] Setting ErrFile to fd 2...
I1002 20:55:21.602236 1027208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:21.602549 1027208 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
I1002 20:55:21.603428 1027208 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:21.603607 1027208 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:21.604170 1027208 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:55:21.623200 1027208 ssh_runner.go:195] Run: systemctl --version
I1002 20:55:21.623263 1027208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:55:21.645167 1027208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:55:21.740994 1027208 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-850296 image ls --format json --alsologtostderr:
[{"id":"5b0ac86d7af466a9878d80e67937ce8662bb6161841a810f9610f363bc2548d1","repoDigests":["docker.io/library/edb77f7faf566507ed137a5c5e049081dc465fb4f7bc0e645a9ba2f52f13615b-tmp@sha256:4f0e91e208de3bdebad7561f6e1830d88b8613cc2cb3947c3640edf417c0192a"],"repoTags":[],"size":"1638179"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff0
41eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.i
o/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDige
sts":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"
],"size":"29037500"},{"id":"135a2c9ec859be926f6f8663641ec9e5d0f656392880ffbef14109a32ccf0abe","repoDigests":["localhost/my-image@sha256:2ee21f649c74b32be505ee93923dc96fe0cfd02efb010640fd85476703d394d0"],"repoTags":["localhost/my-image:functional-850296"],"size":"1640791"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c59
01d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-850296 image ls --format json --alsologtostderr:
I1002 20:55:21.356157 1027173 out.go:360] Setting OutFile to fd 1 ...
I1002 20:55:21.356277 1027173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:21.356290 1027173 out.go:374] Setting ErrFile to fd 2...
I1002 20:55:21.356296 1027173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:21.356560 1027173 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
I1002 20:55:21.357580 1027173 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:21.357727 1027173 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:21.358335 1027173 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:55:21.376409 1027173 ssh_runner.go:195] Run: systemctl --version
I1002 20:55:21.376468 1027173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:55:21.394624 1027173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:55:21.497458 1027173 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
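The payload above is a JSON array of image records. A hedged Go sketch of consuming it, with the struct fields read off the output itself rather than taken from any minikube type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON above; it is an ad-hoc
// struct for this sketch, not a type exported by minikube.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-850296",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		id := img.ID
		if len(id) > 13 {
			id = id[:13] // match the truncated IDs shown in the table view
		}
		fmt.Println(id, img.RepoTags, img.Size)
	}
}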

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-850296 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 135a2c9ec859be926f6f8663641ec9e5d0f656392880ffbef14109a32ccf0abe
repoDigests:
- localhost/my-image@sha256:2ee21f649c74b32be505ee93923dc96fe0cfd02efb010640fd85476703d394d0
repoTags:
- localhost/my-image:functional-850296
size: "1640791"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 5b0ac86d7af466a9878d80e67937ce8662bb6161841a810f9610f363bc2548d1
repoDigests:
- docker.io/library/edb77f7faf566507ed137a5c5e049081dc465fb4f7bc0e645a9ba2f52f13615b-tmp@sha256:4f0e91e208de3bdebad7561f6e1830d88b8613cc2cb3947c3640edf417c0192a
repoTags: []
size: "1638179"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-850296 image ls --format yaml --alsologtostderr:
I1002 20:55:21.126766 1027139 out.go:360] Setting OutFile to fd 1 ...
I1002 20:55:21.126943 1027139 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:21.126953 1027139 out.go:374] Setting ErrFile to fd 2...
I1002 20:55:21.126959 1027139 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:21.127324 1027139 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
I1002 20:55:21.128293 1027139 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:21.128428 1027139 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:21.128904 1027139 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:55:21.147209 1027139 ssh_runner.go:195] Run: systemctl --version
I1002 20:55:21.147312 1027139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:55:21.165436 1027139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:55:21.264817 1027139 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-850296 ssh pgrep buildkitd: exit status 1 (261.547061ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image build -t localhost/my-image:functional-850296 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-850296 image build -t localhost/my-image:functional-850296 testdata/build --alsologtostderr: (3.287995454s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-850296 image build -t localhost/my-image:functional-850296 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5b0ac86d7af
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-850296
--> 135a2c9ec85
Successfully tagged localhost/my-image:functional-850296
135a2c9ec859be926f6f8663641ec9e5d0f656392880ffbef14109a32ccf0abe
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-850296 image build -t localhost/my-image:functional-850296 testdata/build --alsologtostderr:
I1002 20:55:17.613560 1026842 out.go:360] Setting OutFile to fd 1 ...
I1002 20:55:17.614410 1026842 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:17.614451 1026842 out.go:374] Setting ErrFile to fd 2...
I1002 20:55:17.614471 1026842 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:55:17.614756 1026842 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
I1002 20:55:17.615464 1026842 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:17.616250 1026842 config.go:182] Loaded profile config "functional-850296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:55:17.616763 1026842 cli_runner.go:164] Run: docker container inspect functional-850296 --format={{.State.Status}}
I1002 20:55:17.635451 1026842 ssh_runner.go:195] Run: systemctl --version
I1002 20:55:17.635508 1026842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-850296
I1002 20:55:17.655184 1026842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/functional-850296/id_rsa Username:docker}
I1002 20:55:17.748409 1026842 build_images.go:161] Building image from path: /tmp/build.1124565548.tar
I1002 20:55:17.748478 1026842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 20:55:17.756418 1026842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1124565548.tar
I1002 20:55:17.759996 1026842 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1124565548.tar: stat -c "%s %y" /var/lib/minikube/build/build.1124565548.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1124565548.tar': No such file or directory
I1002 20:55:17.760028 1026842 ssh_runner.go:362] scp /tmp/build.1124565548.tar --> /var/lib/minikube/build/build.1124565548.tar (3072 bytes)
I1002 20:55:17.778636 1026842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1124565548
I1002 20:55:17.786704 1026842 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1124565548 -xf /var/lib/minikube/build/build.1124565548.tar
I1002 20:55:17.795320 1026842 crio.go:315] Building image: /var/lib/minikube/build/build.1124565548
I1002 20:55:17.795408 1026842 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-850296 /var/lib/minikube/build/build.1124565548 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1002 20:55:20.822121 1026842 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-850296 /var/lib/minikube/build/build.1124565548 --cgroup-manager=cgroupfs: (3.026687402s)
I1002 20:55:20.822210 1026842 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1124565548
I1002 20:55:20.830075 1026842 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1124565548.tar
I1002 20:55:20.838964 1026842 build_images.go:217] Built localhost/my-image:functional-850296 from /tmp/build.1124565548.tar
I1002 20:55:20.838993 1026842 build_images.go:133] succeeded building to: functional-850296
I1002 20:55:20.838999 1026842 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)
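The stderr above shows the crio path for image builds: no buildkitd is running (the pgrep probe fails), so the build context is tarred, copied to /var/lib/minikube/build on the node, unpacked, and built with sudo podman build. A short Go sketch of driving the same two commands the test runs, build then verify via image ls (buildAndVerify is our wrapper, not a minikube API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildAndVerify builds an image from a local context directory inside the
// node, then confirms the tag appears in the image list, exactly the two
// steps logged above. Profile and tag values echo the log.
func buildAndVerify(profile, tag, context string) error {
	build := exec.Command("minikube", "-p", profile, "image", "build", "-t", tag, context)
	if out, err := build.CombinedOutput(); err != nil {
		return fmt.Errorf("build failed: %v\n%s", err, out)
	}
	ls, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		return err
	}
	if !strings.Contains(string(ls), tag) {
		return fmt.Errorf("%s not found in image list", tag)
	}
	return nil
}

func main() {
	fmt.Println(buildAndVerify("functional-850296",
		"localhost/my-image:functional-850296", "testdata/build"))
}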

TestFunctional/parallel/ImageCommands/Setup (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-850296
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image rm kicbase/echo-server:functional-850296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 update-context --alsologtostderr -v=2
E1002 20:56:34.542192  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-850296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-850296
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-850296
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-850296
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (216.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 20:59:25.746535  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:25.753324  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:25.764703  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:25.786175  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:25.828339  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:25.909888  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:26.071667  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:26.393750  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:27.035773  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:28.317341  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:30.878617  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:36.004784  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:59:46.246963  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:00:06.728848  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:00:47.690176  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:01:34.539921  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:02:09.612140  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m35.851190176s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (216.71s)

TestMultiControlPlane/serial/DeployApp (6.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 kubectl -- rollout status deployment/busybox: (3.81716247s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-2d6ll -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-pzfdz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-zcbc5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-2d6ll -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-pzfdz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-zcbc5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-2d6ll -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-pzfdz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-zcbc5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.58s)

TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-2d6ll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-2d6ll -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-pzfdz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-pzfdz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-zcbc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 kubectl -- exec busybox-7b57f96db7-zcbc5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)
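The pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 above extracts the host IP that each pod then pings. A rough Go equivalent, assuming kubectl on PATH and the context and pod names from this log (strings.Fields only approximates cut's single-space field splitting):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromPod resolves host.minikube.internal inside a pod and takes the
// third field of nslookup's fifth output line, like the shell pipeline above.
func hostIPFromPod(context, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output:\n%s", out)
	}
	fields := strings.Fields(lines[4]) // awk 'NR==5'
	if len(fields) < 3 {
		return "", fmt.Errorf("no address on line 5: %q", lines[4])
	}
	return fields[2], nil // cut -d' ' -f3
}

func main() {
	ip, err := hostIPFromPod("ha-759816", "busybox-7b57f96db7-2d6ll")
	fmt.Println(ip, err)
}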

TestMultiControlPlane/serial/AddWorkerNode (61.64s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 node add --alsologtostderr -v 5: (1m0.199386321s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5: (1.439355203s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.64s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-759816 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.055563686s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

TestMultiControlPlane/serial/CopyFile (19.12s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 status --output json --alsologtostderr -v 5: (1.020503278s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp testdata/cp-test.txt ha-759816:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2868375635/001/cp-test_ha-759816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816:/home/docker/cp-test.txt ha-759816-m02:/home/docker/cp-test_ha-759816_ha-759816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test_ha-759816_ha-759816-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816:/home/docker/cp-test.txt ha-759816-m03:/home/docker/cp-test_ha-759816_ha-759816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test_ha-759816_ha-759816-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816:/home/docker/cp-test.txt ha-759816-m04:/home/docker/cp-test_ha-759816_ha-759816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test_ha-759816_ha-759816-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp testdata/cp-test.txt ha-759816-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2868375635/001/cp-test_ha-759816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m02:/home/docker/cp-test.txt ha-759816:/home/docker/cp-test_ha-759816-m02_ha-759816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test_ha-759816-m02_ha-759816.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m02:/home/docker/cp-test.txt ha-759816-m03:/home/docker/cp-test_ha-759816-m02_ha-759816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test_ha-759816-m02_ha-759816-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m02:/home/docker/cp-test.txt ha-759816-m04:/home/docker/cp-test_ha-759816-m02_ha-759816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test_ha-759816-m02_ha-759816-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp testdata/cp-test.txt ha-759816-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2868375635/001/cp-test_ha-759816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m03:/home/docker/cp-test.txt ha-759816:/home/docker/cp-test_ha-759816-m03_ha-759816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test_ha-759816-m03_ha-759816.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m03:/home/docker/cp-test.txt ha-759816-m02:/home/docker/cp-test_ha-759816-m03_ha-759816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test_ha-759816-m03_ha-759816-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m03:/home/docker/cp-test.txt ha-759816-m04:/home/docker/cp-test_ha-759816-m03_ha-759816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test_ha-759816-m03_ha-759816-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp testdata/cp-test.txt ha-759816-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2868375635/001/cp-test_ha-759816-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m04:/home/docker/cp-test.txt ha-759816:/home/docker/cp-test_ha-759816-m04_ha-759816.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816 "sudo cat /home/docker/cp-test_ha-759816-m04_ha-759816.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m04:/home/docker/cp-test.txt ha-759816-m02:/home/docker/cp-test_ha-759816-m04_ha-759816-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test_ha-759816-m04_ha-759816-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 cp ha-759816-m04:/home/docker/cp-test.txt ha-759816-m03:/home/docker/cp-test_ha-759816-m04_ha-759816-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m03 "sudo cat /home/docker/cp-test_ha-759816-m04_ha-759816-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.12s)
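Every hop above is the same two-step pattern: minikube cp pushes a file to a node, then minikube ssh cats it back on the target to confirm the contents arrived intact. Reduced to a single round trip, with the profile and node names taken from this run:

	# copy from the host into node m02, then read it back over ssh
	out/minikube-linux-arm64 -p ha-759816 cp testdata/cp-test.txt ha-759816-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-759816 ssh -n ha-759816-m02 "sudo cat /home/docker/cp-test.txt"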
TestMultiControlPlane/serial/StopSecondaryNode (12.66s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node stop m02 --alsologtostderr -v 5
E1002 21:04:25.747239  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 node stop m02 --alsologtostderr -v 5: (11.908955647s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5: exit status 7 (749.961841ms)
-- stdout --
	ha-759816
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-759816-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759816-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-759816-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1002 21:04:33.050924 1042298 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:04:33.051098 1042298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:04:33.051128 1042298 out.go:374] Setting ErrFile to fd 2...
	I1002 21:04:33.051149 1042298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:04:33.051437 1042298 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:04:33.051662 1042298 out.go:368] Setting JSON to false
	I1002 21:04:33.051763 1042298 mustload.go:65] Loading cluster: ha-759816
	I1002 21:04:33.051840 1042298 notify.go:221] Checking for updates...
	I1002 21:04:33.052221 1042298 config.go:182] Loaded profile config "ha-759816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:04:33.052255 1042298 status.go:174] checking status of ha-759816 ...
	I1002 21:04:33.052821 1042298 cli_runner.go:164] Run: docker container inspect ha-759816 --format={{.State.Status}}
	I1002 21:04:33.076277 1042298 status.go:371] ha-759816 host status = "Running" (err=<nil>)
	I1002 21:04:33.076302 1042298 host.go:66] Checking if "ha-759816" exists ...
	I1002 21:04:33.076600 1042298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-759816
	I1002 21:04:33.102070 1042298 host.go:66] Checking if "ha-759816" exists ...
	I1002 21:04:33.102391 1042298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:04:33.102448 1042298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-759816
	I1002 21:04:33.125835 1042298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33915 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/ha-759816/id_rsa Username:docker}
	I1002 21:04:33.232512 1042298 ssh_runner.go:195] Run: systemctl --version
	I1002 21:04:33.239138 1042298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:04:33.252893 1042298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:04:33.315227 1042298 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 21:04:33.304973871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:04:33.315754 1042298 kubeconfig.go:125] found "ha-759816" server: "https://192.168.49.254:8443"
	I1002 21:04:33.315788 1042298 api_server.go:166] Checking apiserver status ...
	I1002 21:04:33.315834 1042298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:04:33.328049 1042298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	I1002 21:04:33.336984 1042298 api_server.go:182] apiserver freezer: "5:freezer:/docker/f9b0ccadc3fef451ca77918f7300bb51b5905cbb6cea5a753202e7949f647218/crio/crio-fabc7b6e7efe2a86210179e2a573e023bd05280288fb48095b680b0d40cc37d1"
	I1002 21:04:33.337069 1042298 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f9b0ccadc3fef451ca77918f7300bb51b5905cbb6cea5a753202e7949f647218/crio/crio-fabc7b6e7efe2a86210179e2a573e023bd05280288fb48095b680b0d40cc37d1/freezer.state
	I1002 21:04:33.344882 1042298 api_server.go:204] freezer state: "THAWED"
	I1002 21:04:33.344911 1042298 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:04:33.354655 1042298 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:04:33.354683 1042298 status.go:463] ha-759816 apiserver status = Running (err=<nil>)
	I1002 21:04:33.354694 1042298 status.go:176] ha-759816 status: &{Name:ha-759816 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:04:33.354711 1042298 status.go:174] checking status of ha-759816-m02 ...
	I1002 21:04:33.355021 1042298 cli_runner.go:164] Run: docker container inspect ha-759816-m02 --format={{.State.Status}}
	I1002 21:04:33.372377 1042298 status.go:371] ha-759816-m02 host status = "Stopped" (err=<nil>)
	I1002 21:04:33.372402 1042298 status.go:384] host is not running, skipping remaining checks
	I1002 21:04:33.372409 1042298 status.go:176] ha-759816-m02 status: &{Name:ha-759816-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:04:33.372434 1042298 status.go:174] checking status of ha-759816-m03 ...
	I1002 21:04:33.372751 1042298 cli_runner.go:164] Run: docker container inspect ha-759816-m03 --format={{.State.Status}}
	I1002 21:04:33.389112 1042298 status.go:371] ha-759816-m03 host status = "Running" (err=<nil>)
	I1002 21:04:33.389138 1042298 host.go:66] Checking if "ha-759816-m03" exists ...
	I1002 21:04:33.389420 1042298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-759816-m03
	I1002 21:04:33.406567 1042298 host.go:66] Checking if "ha-759816-m03" exists ...
	I1002 21:04:33.406886 1042298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:04:33.406931 1042298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-759816-m03
	I1002 21:04:33.425810 1042298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33925 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/ha-759816-m03/id_rsa Username:docker}
	I1002 21:04:33.523283 1042298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:04:33.536672 1042298 kubeconfig.go:125] found "ha-759816" server: "https://192.168.49.254:8443"
	I1002 21:04:33.536701 1042298 api_server.go:166] Checking apiserver status ...
	I1002 21:04:33.536742 1042298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:04:33.547253 1042298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1172/cgroup
	I1002 21:04:33.555648 1042298 api_server.go:182] apiserver freezer: "5:freezer:/docker/d357dc26486c2fc6c3ce391817212f5e16a23e9020a539e96bce8da59c1b8d8d/crio/crio-fb346cef3757421a07f67399173fccb6d98d6e143e9f9574c9e9dee7e629de91"
	I1002 21:04:33.555722 1042298 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d357dc26486c2fc6c3ce391817212f5e16a23e9020a539e96bce8da59c1b8d8d/crio/crio-fb346cef3757421a07f67399173fccb6d98d6e143e9f9574c9e9dee7e629de91/freezer.state
	I1002 21:04:33.564200 1042298 api_server.go:204] freezer state: "THAWED"
	I1002 21:04:33.564228 1042298 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:04:33.574252 1042298 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:04:33.574281 1042298 status.go:463] ha-759816-m03 apiserver status = Running (err=<nil>)
	I1002 21:04:33.574290 1042298 status.go:176] ha-759816-m03 status: &{Name:ha-759816-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:04:33.574315 1042298 status.go:174] checking status of ha-759816-m04 ...
	I1002 21:04:33.574650 1042298 cli_runner.go:164] Run: docker container inspect ha-759816-m04 --format={{.State.Status}}
	I1002 21:04:33.593883 1042298 status.go:371] ha-759816-m04 host status = "Running" (err=<nil>)
	I1002 21:04:33.593909 1042298 host.go:66] Checking if "ha-759816-m04" exists ...
	I1002 21:04:33.594273 1042298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-759816-m04
	I1002 21:04:33.611960 1042298 host.go:66] Checking if "ha-759816-m04" exists ...
	I1002 21:04:33.612270 1042298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:04:33.612313 1042298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-759816-m04
	I1002 21:04:33.629902 1042298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/ha-759816-m04/id_rsa Username:docker}
	I1002 21:04:33.730974 1042298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:04:33.745331 1042298 status.go:176] ha-759816-m04 status: &{Name:ha-759816-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.66s)
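The stderr trace above shows how status decides an apiserver is healthy: locate the kube-apiserver pid, read its freezer cgroup, confirm the cgroup is THAWED (i.e. not paused), then probe /healthz through the HA virtual IP. A sketch of the same sequence run by hand inside a node (via minikube ssh); note /healthz may demand credentials on clusters with anonymous auth disabled:

	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer$CG/freezer.state   # expect THAWED
	curl -sk https://192.168.49.254:8443/healthz       # expect: ok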
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node start m02 --alsologtostderr -v 5
E1002 21:04:53.454469  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 node start m02 --alsologtostderr -v 5: (19.554143697s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5: (1.181243261s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.054308547s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.95s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 stop --alsologtostderr -v 5: (36.668139057s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 start --wait true --alsologtostderr -v 5
E1002 21:06:34.540337  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 start --wait true --alsologtostderr -v 5: (1m34.113559104s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.95s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.56s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 node delete m03 --alsologtostderr -v 5: (10.602467698s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.56s)
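The go-template in the last step is the actual readiness assertion: it walks each node's status.conditions and prints the status of the Ready condition, one line per node. Stripped of the harness quoting it runs as-is:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# with m03 deleted, expect one "True" per remaining node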
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (35.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 stop --alsologtostderr -v 5: (35.478453538s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5: exit status 7 (115.754926ms)
-- stdout --
	ha-759816
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759816-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759816-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 21:07:55.271513 1054319 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:55.271647 1054319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:55.271658 1054319 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:55.271664 1054319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:55.271901 1054319 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:07:55.272087 1054319 out.go:368] Setting JSON to false
	I1002 21:07:55.272117 1054319 mustload.go:65] Loading cluster: ha-759816
	I1002 21:07:55.272504 1054319 config.go:182] Loaded profile config "ha-759816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:55.272522 1054319 status.go:174] checking status of ha-759816 ...
	I1002 21:07:55.273024 1054319 cli_runner.go:164] Run: docker container inspect ha-759816 --format={{.State.Status}}
	I1002 21:07:55.273233 1054319 notify.go:221] Checking for updates...
	I1002 21:07:55.295758 1054319 status.go:371] ha-759816 host status = "Stopped" (err=<nil>)
	I1002 21:07:55.295779 1054319 status.go:384] host is not running, skipping remaining checks
	I1002 21:07:55.295786 1054319 status.go:176] ha-759816 status: &{Name:ha-759816 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:07:55.295819 1054319 status.go:174] checking status of ha-759816-m02 ...
	I1002 21:07:55.296126 1054319 cli_runner.go:164] Run: docker container inspect ha-759816-m02 --format={{.State.Status}}
	I1002 21:07:55.323600 1054319 status.go:371] ha-759816-m02 host status = "Stopped" (err=<nil>)
	I1002 21:07:55.323625 1054319 status.go:384] host is not running, skipping remaining checks
	I1002 21:07:55.323632 1054319 status.go:176] ha-759816-m02 status: &{Name:ha-759816-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:07:55.323652 1054319 status.go:174] checking status of ha-759816-m04 ...
	I1002 21:07:55.323954 1054319 cli_runner.go:164] Run: docker container inspect ha-759816-m04 --format={{.State.Status}}
	I1002 21:07:55.340615 1054319 status.go:371] ha-759816-m04 host status = "Stopped" (err=<nil>)
	I1002 21:07:55.340639 1054319 status.go:384] host is not running, skipping remaining checks
	I1002 21:07:55.340658 1054319 status.go:176] ha-759816-m04 status: &{Name:ha-759816-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.59s)
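Note that status reports through its exit code as well as its text: in these runs exit status 7 accompanies stopped hosts, so the test (and any script) can assert on a fully stopped cluster without parsing stdout. A sketch:

	out/minikube-linux-arm64 -p ha-759816 status || echo "cluster not fully running (exit $?)"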
TestMultiControlPlane/serial/RestartCluster (75.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m14.272392242s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (93.8s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 node add --control-plane --alsologtostderr -v 5
E1002 21:09:25.746519  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:09:37.604319  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 node add --control-plane --alsologtostderr -v 5: (1m32.670085963s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-759816 status --alsologtostderr -v 5: (1.129957076s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.142223179s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

TestJSONOutput/start/Command (77.43s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-003875 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1002 21:11:34.540021  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-003875 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.423100584s)
--- PASS: TestJSONOutput/start/Command (77.43s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-003875 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-003875 --output=json --user=testUser: (5.722059376s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-251738 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-251738 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (90.057799ms)
-- stdout --
	{"specversion":"1.0","id":"6482f0da-acdf-4c9c-9dde-06444a3095fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-251738] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4932a37-f22e-4aed-9f54-9471cbd5849b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"e6263cd4-95cc-43cb-b3e0-5ba84420b033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5cc1b24a-eadb-4f10-90b6-1b12d128679e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig"}}
	{"specversion":"1.0","id":"33f04683-bee0-456f-9d34-23f97a454174","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube"}}
	{"specversion":"1.0","id":"737e8ec1-681b-49ce-a139-d2f40689a014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6082cbf4-b1ad-41bc-99db-152f04467924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5fd0f296-c20a-4b88-b0f6-951916e216cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-251738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-251738
--- PASS: TestErrorJSONOutput (0.24s)
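Each line emitted under --output=json is a CloudEvents envelope (specversion, id, source, type, data), so the stream can be filtered mechanically. A sketch using jq, assuming it is installed:

	out/minikube-linux-arm64 start -p json-output-error-251738 --output=json --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# for the run above this prints: The driver 'fail' is not supported on linux/arm64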
TestKicCustomNetwork/create_custom_network (41.12s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-076847 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-076847 --network=: (38.914006801s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-076847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-076847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-076847: (2.179611447s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.12s)

TestKicCustomNetwork/use_default_bridge_network (37.31s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-532306 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-532306 --network=bridge: (35.1349102s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-532306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-532306
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-532306: (2.148963524s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.31s)

TestKicExistingNetwork (36.08s)

=== RUN   TestKicExistingNetwork
I1002 21:13:45.852991  993954 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 21:13:45.868906  993954 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 21:13:45.869834  993954 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 21:13:45.869870  993954 cli_runner.go:164] Run: docker network inspect existing-network
W1002 21:13:45.887058  993954 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 21:13:45.887093  993954 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1002 21:13:45.887111  993954 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1002 21:13:45.887230  993954 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:13:45.904526  993954 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c06a83b4618b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:65:5b:88:54:7f} reservation:<nil>}
I1002 21:13:45.904868  993954 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40003ca9f0}
I1002 21:13:45.904893  993954 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 21:13:45.904946  993954 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 21:13:45.963136  993954 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-157260 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-157260 --network=existing-network: (34.002856046s)
helpers_test.go:175: Cleaning up "existing-network-157260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-157260
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-157260: (1.937321863s)
I1002 21:14:21.921455  993954 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.08s)
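The trace shows the interesting part of this test: minikube skips the already-taken 192.168.49.0/24, picks 192.168.58.0/24, and creates the labelled bridge network itself before starting against it. Pre-creating the network by hand looks the same (flags copied from the log; names are this run's):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	out/minikube-linux-arm64 start -p existing-network-157260 --network=existing-network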
TestKicCustomSubnet (35.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-975138 --subnet=192.168.60.0/24
E1002 21:14:25.746920  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-975138 --subnet=192.168.60.0/24: (33.114227903s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-975138 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-975138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-975138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-975138: (2.088867598s)
--- PASS: TestKicCustomSubnet (35.24s)
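The subnet assertion is a one-liner against Docker's IPAM config and works for any network (name from this run):

	docker network inspect custom-subnet-975138 --format '{{(index .IPAM.Config 0).Subnet}}'
	# expect: 192.168.60.0/24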
TestKicStaticIP (38.19s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-076395 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-076395 --static-ip=192.168.200.200: (35.934519546s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-076395 ip
helpers_test.go:175: Cleaning up "static-ip-076395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-076395
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-076395: (2.092459434s)
--- PASS: TestKicStaticIP (38.19s)
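Pinning a node address and reading it back is the whole check here; both commands come straight from the log:

	out/minikube-linux-arm64 start -p static-ip-076395 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-076395 ip   # expect: 192.168.200.200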
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (72.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-280838 --driver=docker  --container-runtime=crio
E1002 21:15:48.815766  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-280838 --driver=docker  --container-runtime=crio: (30.832584393s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-283529 --driver=docker  --container-runtime=crio
E1002 21:16:34.546587  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-283529 --driver=docker  --container-runtime=crio: (36.25305205s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-280838
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-283529
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-283529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-283529
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-283529: (2.005569104s)
helpers_test.go:175: Cleaning up "first-280838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-280838
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-280838: (1.932350848s)
--- PASS: TestMinikubeProfile (72.42s)
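
The "profile list -ojson" calls are what make the two profiles machine-checkable. A hedged sketch of consuming that output; the schema assumed here (top-level "valid"/"invalid" arrays of objects carrying a "Name" field) is an assumption and may differ across minikube versions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields this sketch needs.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}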

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-533522 --memory=3072 --mount-string /tmp/TestMountStartserial3010212175/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-533522 --memory=3072 --mount-string /tmp/TestMountStartserial3010212175/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.755311921s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.76s)
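
The long start invocation above is easier to read as a constructed argument list: one host directory is exported over 9p into each profile, and only the mount port differs between the two profiles (46464 vs 46465) so both mounts can coexist. A sketch of how such an invocation can be assembled (the host directory below is illustrative):

package main

import (
	"fmt"
	"strings"
)

// mountStartArgs builds the flag set used by the two mount-start profiles.
func mountStartArgs(profile, hostDir, port string) []string {
	return []string{
		"start", "-p", profile,
		"--memory=3072",
		"--mount-string", hostDir + ":/minikube-host",
		"--mount-gid", "0", "--mount-uid", "0",
		"--mount-msize", "6543",
		"--mount-port", port,
		"--no-kubernetes",
		"--driver=docker", "--container-runtime=crio",
	}
}

func main() {
	fmt.Println(strings.Join(mountStartArgs("mount-start-1-533522", "/tmp/host", "46464"), " "))
}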

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-533522 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-535491 --memory=3072 --mount-string /tmp/TestMountStartserial3010212175/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-535491 --memory=3072 --mount-string /tmp/TestMountStartserial3010212175/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.153893491s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-535491 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-533522 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-533522 --alsologtostderr -v=5: (1.637889967s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-535491 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-535491
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-535491: (1.220787801s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.37s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-535491
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-535491: (7.374113571s)
--- PASS: TestMountStart/serial/RestartStopped (8.37s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-535491 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (134.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633145 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1002 21:19:25.747394  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633145 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m14.065675962s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-633145 -- rollout status deployment/busybox: (3.328233665s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-v4jql -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-zm7tq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-v4jql -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-zm7tq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-v4jql -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-zm7tq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.08s)
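
The exec lines above walk a small DNS matrix: every busybox pod must resolve an external name plus the in-cluster service name at increasing levels of qualification. A compact sketch of the same loop, using plain kubectl against the context rather than the minikube kubectl wrapper (pod names are the ones from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-v4jql", "busybox-7b57f96db7-zm7tq"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-633145",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("lookup %s in %s failed: %v\n%s", name, pod, err, out)
			}
		}
	}
}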

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-v4jql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-v4jql -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-zm7tq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633145 -- exec busybox-7b57f96db7-zm7tq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
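
The "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" pipeline is positional: it assumes busybox nslookup's layout, where line 5 carries the resolved address and the IP sits in the third space-separated field. A sketch of the same extraction done by keyword instead of position (the sample output is illustrative):

package main

import (
	"fmt"
	"strings"
)

// hostIP scans nslookup output for the first "Address" line after the
// DNS-server banner and returns its last field.
func hostIP(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 3 {
		return ""
	}
	for _, line := range lines[2:] { // skip the Server/Address banner
		f := strings.Fields(line)
		if len(f) >= 2 && strings.HasPrefix(f[0], "Address") {
			return f[len(f)-1]
		}
	}
	return ""
}

func main() {
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 192.168.67.1\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1
}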

                                                
                                    
TestMultiNode/serial/AddNode (58.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-633145 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-633145 -v=5 --alsologtostderr: (58.135081524s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-633145 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp testdata/cp-test.txt multinode-633145:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3162657787/001/cp-test_multinode-633145.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145:/home/docker/cp-test.txt multinode-633145-m02:/home/docker/cp-test_multinode-633145_multinode-633145-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m02 "sudo cat /home/docker/cp-test_multinode-633145_multinode-633145-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145:/home/docker/cp-test.txt multinode-633145-m03:/home/docker/cp-test_multinode-633145_multinode-633145-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m03 "sudo cat /home/docker/cp-test_multinode-633145_multinode-633145-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp testdata/cp-test.txt multinode-633145-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3162657787/001/cp-test_multinode-633145-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145-m02:/home/docker/cp-test.txt multinode-633145:/home/docker/cp-test_multinode-633145-m02_multinode-633145.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145 "sudo cat /home/docker/cp-test_multinode-633145-m02_multinode-633145.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145-m02:/home/docker/cp-test.txt multinode-633145-m03:/home/docker/cp-test_multinode-633145-m02_multinode-633145-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m03 "sudo cat /home/docker/cp-test_multinode-633145-m02_multinode-633145-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp testdata/cp-test.txt multinode-633145-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3162657787/001/cp-test_multinode-633145-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145-m03:/home/docker/cp-test.txt multinode-633145:/home/docker/cp-test_multinode-633145-m03_multinode-633145.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145 "sudo cat /home/docker/cp-test_multinode-633145-m03_multinode-633145.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 cp multinode-633145-m03:/home/docker/cp-test.txt multinode-633145-m02:/home/docker/cp-test_multinode-633145-m03_multinode-633145-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 ssh -n multinode-633145-m02 "sudo cat /home/docker/cp-test_multinode-633145-m03_multinode-633145-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.34s)
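
The block above is an all-pairs copy matrix. A sketch of its core loop: for each ordered pair of nodes, "minikube cp" moves the file and "minikube ssh -n" cats it back to prove it landed (node names from the log; the extra testdata and /tmp legs are omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "multinode-633145"
	nodes := []string{"multinode-633145", "multinode-633145-m02", "multinode-633145-m03"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
				"cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath).Run(); err != nil {
				panic(err)
			}
			if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
				"ssh", "-n", dst, "sudo cat "+dstPath).Run(); err != nil {
				panic(err)
			}
		}
	}
}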

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-633145 node stop m03: (1.215253127s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633145 status: exit status 7 (536.088656ms)

                                                
                                                
-- stdout --
	multinode-633145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-633145-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-633145-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr: exit status 7 (522.218548ms)

                                                
                                                
-- stdout --
	multinode-633145
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-633145-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-633145-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:20:51.205610 1104751 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:20:51.205751 1104751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:20:51.205764 1104751 out.go:374] Setting ErrFile to fd 2...
	I1002 21:20:51.205769 1104751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:20:51.206397 1104751 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:20:51.206642 1104751 out.go:368] Setting JSON to false
	I1002 21:20:51.206695 1104751 mustload.go:65] Loading cluster: multinode-633145
	I1002 21:20:51.206751 1104751 notify.go:221] Checking for updates...
	I1002 21:20:51.207668 1104751 config.go:182] Loaded profile config "multinode-633145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:20:51.207691 1104751 status.go:174] checking status of multinode-633145 ...
	I1002 21:20:51.208244 1104751 cli_runner.go:164] Run: docker container inspect multinode-633145 --format={{.State.Status}}
	I1002 21:20:51.232701 1104751 status.go:371] multinode-633145 host status = "Running" (err=<nil>)
	I1002 21:20:51.232731 1104751 host.go:66] Checking if "multinode-633145" exists ...
	I1002 21:20:51.233036 1104751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-633145
	I1002 21:20:51.253816 1104751 host.go:66] Checking if "multinode-633145" exists ...
	I1002 21:20:51.254216 1104751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:20:51.254273 1104751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-633145
	I1002 21:20:51.273134 1104751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34036 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/multinode-633145/id_rsa Username:docker}
	I1002 21:20:51.368856 1104751 ssh_runner.go:195] Run: systemctl --version
	I1002 21:20:51.375607 1104751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:20:51.388443 1104751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:20:51.442277 1104751 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:20:51.432495469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:20:51.442837 1104751 kubeconfig.go:125] found "multinode-633145" server: "https://192.168.67.2:8443"
	I1002 21:20:51.442876 1104751 api_server.go:166] Checking apiserver status ...
	I1002 21:20:51.442926 1104751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:20:51.455808 1104751 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1210/cgroup
	I1002 21:20:51.464262 1104751 api_server.go:182] apiserver freezer: "5:freezer:/docker/d3191bf23f7d2b7e39059a96024fc0986315b6e96b06aacde528e71b479f2c6b/crio/crio-b7ca78d2c446ea68b0e0bd2843a220936dd7c3c6ba03d31eeecfe95d9db36308"
	I1002 21:20:51.464343 1104751 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d3191bf23f7d2b7e39059a96024fc0986315b6e96b06aacde528e71b479f2c6b/crio/crio-b7ca78d2c446ea68b0e0bd2843a220936dd7c3c6ba03d31eeecfe95d9db36308/freezer.state
	I1002 21:20:51.471777 1104751 api_server.go:204] freezer state: "THAWED"
	I1002 21:20:51.471803 1104751 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 21:20:51.479837 1104751 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 21:20:51.479861 1104751 status.go:463] multinode-633145 apiserver status = Running (err=<nil>)
	I1002 21:20:51.479872 1104751 status.go:176] multinode-633145 status: &{Name:multinode-633145 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:20:51.479889 1104751 status.go:174] checking status of multinode-633145-m02 ...
	I1002 21:20:51.480230 1104751 cli_runner.go:164] Run: docker container inspect multinode-633145-m02 --format={{.State.Status}}
	I1002 21:20:51.497525 1104751 status.go:371] multinode-633145-m02 host status = "Running" (err=<nil>)
	I1002 21:20:51.497553 1104751 host.go:66] Checking if "multinode-633145-m02" exists ...
	I1002 21:20:51.497863 1104751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-633145-m02
	I1002 21:20:51.515537 1104751 host.go:66] Checking if "multinode-633145-m02" exists ...
	I1002 21:20:51.515850 1104751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:20:51.515894 1104751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-633145-m02
	I1002 21:20:51.539528 1104751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/21683-992084/.minikube/machines/multinode-633145-m02/id_rsa Username:docker}
	I1002 21:20:51.631134 1104751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:20:51.644073 1104751 status.go:176] multinode-633145-m02 status: &{Name:multinode-633145-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:20:51.644111 1104751 status.go:174] checking status of multinode-633145-m03 ...
	I1002 21:20:51.644473 1104751 cli_runner.go:164] Run: docker container inspect multinode-633145-m03 --format={{.State.Status}}
	I1002 21:20:51.661733 1104751 status.go:371] multinode-633145-m03 host status = "Stopped" (err=<nil>)
	I1002 21:20:51.661759 1104751 status.go:384] host is not running, skipping remaining checks
	I1002 21:20:51.661768 1104751 status.go:176] multinode-633145-m03 status: &{Name:multinode-633145-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
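
Note the exit code: minikube status returns 7 when any host is stopped, so the non-zero exits above are the expected signal, not a failure. A sketch of handling that contract in a caller:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-633145", "status").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit status 7 encodes "some component stopped"; stdout still
		// carries the per-node breakdown shown above.
		fmt.Printf("degraded cluster:\n%s", out)
	default:
		panic(err)
	}
}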

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-633145 node start m03 -v=5 --alsologtostderr: (7.012986057s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.77s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-633145
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-633145
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-633145: (24.723785109s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633145 --wait=true -v=5 --alsologtostderr
E1002 21:21:34.540523  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633145 --wait=true -v=5 --alsologtostderr: (51.978446297s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-633145
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.83s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-633145 node delete m03: (4.699213825s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.37s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-633145 stop: (23.722957163s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633145 status: exit status 7 (114.480459ms)

                                                
                                                
-- stdout --
	multinode-633145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-633145-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr: exit status 7 (104.188581ms)

                                                
                                                
-- stdout --
	multinode-633145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-633145-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:22:45.527524 1112422 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:22:45.527679 1112422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:45.527711 1112422 out.go:374] Setting ErrFile to fd 2...
	I1002 21:22:45.527723 1112422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:45.527990 1112422 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:22:45.528211 1112422 out.go:368] Setting JSON to false
	I1002 21:22:45.528256 1112422 mustload.go:65] Loading cluster: multinode-633145
	I1002 21:22:45.528349 1112422 notify.go:221] Checking for updates...
	I1002 21:22:45.528737 1112422 config.go:182] Loaded profile config "multinode-633145": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:45.528758 1112422 status.go:174] checking status of multinode-633145 ...
	I1002 21:22:45.529334 1112422 cli_runner.go:164] Run: docker container inspect multinode-633145 --format={{.State.Status}}
	I1002 21:22:45.552690 1112422 status.go:371] multinode-633145 host status = "Stopped" (err=<nil>)
	I1002 21:22:45.552715 1112422 status.go:384] host is not running, skipping remaining checks
	I1002 21:22:45.552722 1112422 status.go:176] multinode-633145 status: &{Name:multinode-633145 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:22:45.552756 1112422 status.go:174] checking status of multinode-633145-m02 ...
	I1002 21:22:45.553055 1112422 cli_runner.go:164] Run: docker container inspect multinode-633145-m02 --format={{.State.Status}}
	I1002 21:22:45.579635 1112422 status.go:371] multinode-633145-m02 host status = "Stopped" (err=<nil>)
	I1002 21:22:45.579662 1112422 status.go:384] host is not running, skipping remaining checks
	I1002 21:22:45.579669 1112422 status.go:176] multinode-633145-m02 status: &{Name:multinode-633145-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.94s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633145 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633145 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.678617337s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633145 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-633145
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633145-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-633145-m02 --driver=docker  --container-runtime=crio: exit status 14 (100.813751ms)

                                                
                                                
-- stdout --
	* [multinode-633145-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-633145-m02' is duplicated with machine name 'multinode-633145-m02' in profile 'multinode-633145'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633145-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633145-m03 --driver=docker  --container-runtime=crio: (38.121774487s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-633145
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-633145: exit status 80 (327.512453ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-633145 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-633145-m03 already exists in multinode-633145-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-633145-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-633145-m03: (1.969189844s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.57s)
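
Two distinct exit codes carry the conflict semantics here: 14 (MK_USAGE) when a new profile name collides with an existing machine name, and 80 (GUEST_NODE_ADD) when node add would recreate a node that already exists elsewhere. A sketch asserting the first case:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "multinode-633145-m02", "--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("duplicate profile name rejected, as expected")
		return
	}
	panic(fmt.Sprintf("expected exit status 14, got %v", err))
}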

                                                
                                    
TestScheduledStopUnix (110.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-382553 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-382553 --memory=3072 --driver=docker  --container-runtime=crio: (34.108194279s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-382553 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-382553 -n scheduled-stop-382553
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-382553 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 21:32:18.555228  993954 retry.go:31] will retry after 145.092µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.556467  993954 retry.go:31] will retry after 84.818µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.557583  993954 retry.go:31] will retry after 260.759µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.558685  993954 retry.go:31] will retry after 227.557µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.559797  993954 retry.go:31] will retry after 680.053µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.560882  993954 retry.go:31] will retry after 569.704µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.561960  993954 retry.go:31] will retry after 633.787µs: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.563028  993954 retry.go:31] will retry after 1.498315ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.565184  993954 retry.go:31] will retry after 2.434851ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.568390  993954 retry.go:31] will retry after 5.215962ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.574588  993954 retry.go:31] will retry after 7.481366ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.582751  993954 retry.go:31] will retry after 8.16426ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.591970  993954 retry.go:31] will retry after 7.413664ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.600393  993954 retry.go:31] will retry after 28.048594ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
I1002 21:32:18.628674  993954 retry.go:31] will retry after 17.412137ms: open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/scheduled-stop-382553/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-382553 --cancel-scheduled
E1002 21:32:28.817442  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-382553 -n scheduled-stop-382553
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-382553
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-382553 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-382553
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-382553: exit status 7 (68.926899ms)

                                                
                                                
-- stdout --
	scheduled-stop-382553
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-382553 -n scheduled-stop-382553
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-382553 -n scheduled-stop-382553: exit status 7 (67.301099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-382553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-382553
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-382553: (4.998577038s)
--- PASS: TestScheduledStopUnix (110.64s)
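
The retry.go lines in this test show the harness polling for the scheduled-stop pid file with roughly doubling waits. A generic sketch of that poll-with-backoff pattern (the pid path below is illustrative; the real one sits under the profile directory):

package main

import (
	"fmt"
	"os"
	"time"
)

// retryWithBackoff calls f up to attempts times, doubling the wait after
// each failure, and returns the last error if it never succeeds.
func retryWithBackoff(attempts int, first time.Duration, f func() error) error {
	delay := first
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(15, 100*time.Microsecond, func() error {
		_, err := os.Stat("/tmp/scheduled-stop-382553/pid") // hypothetical path
		return err
	})
	fmt.Println("final result:", err)
}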

                                                
                                    
TestInsufficientStorage (14.3s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-776115 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-776115 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.810736829s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fa526414-a561-4afc-bf34-a0a326ca27a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-776115] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"165f9d10-e4a3-4306-a9d9-b6fb8401ae7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"a06c27a1-9dbe-4b6a-8ed1-ae0ddcc288d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aabcd6ac-7f46-4751-8b95-cbd4a654051b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig"}}
	{"specversion":"1.0","id":"1d3ecbb4-e768-4293-98b1-4e4cafdd68fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube"}}
	{"specversion":"1.0","id":"4a69d16a-9e3c-4841-b853-d265888a41a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ca5f748b-dcae-42ac-ac7f-6eeeccf5eda2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"619962b7-71eb-422c-a51b-6acfabb35161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8251cbfa-d48e-4c2c-a968-ca915f7ab5df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"49e2d95f-4f2e-435a-96a8-b334e65e7044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"36ff8f04-abed-4229-aac1-10db047ee974","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"58d35cbc-b10f-4a7d-809f-2ad9d530306d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-776115\" primary control-plane node in \"insufficient-storage-776115\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"faf9bfab-a489-4015-be0f-c3b5c8ba0690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"097a8fcd-7e0a-4c3f-baca-f11f8478ca15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"382d1c03-5129-4b7b-a025-39d25b01fd55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-776115 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-776115 --output=json --layout=cluster: exit status 7 (283.454799ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-776115","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-776115","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:33:46.649017 1129268 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-776115" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-776115 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-776115 --output=json --layout=cluster: exit status 7 (298.718949ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-776115","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-776115","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:33:46.945829 1129334 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-776115" does not appear in /home/jenkins/minikube-integration/21683-992084/kubeconfig
	E1002 21:33:46.955750 1129334 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/insufficient-storage-776115/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-776115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-776115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-776115: (1.909630314s)
--- PASS: TestInsufficientStorage (14.30s)
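
With --output=json, each of the stdout lines above is one CloudEvent, and the run ends with an io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A hedged sketch of scanning that stream for the error event; the struct mirrors only the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"`
		Name     string `json:"name"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	// Usage (illustrative): minikube start --output=json ... | go run scan.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
		}
	}
}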

                                                
                                    
TestRunningBinaryUpgrade (56.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.858878107 start -p running-upgrade-497263 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.858878107 start -p running-upgrade-497263 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.258543277s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-497263 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-497263 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.782192271s)
helpers_test.go:175: Cleaning up "running-upgrade-497263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-497263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-497263: (2.033922983s)
--- PASS: TestRunningBinaryUpgrade (56.72s)

                                                
                                    
TestKubernetesUpgrade (356.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.958957851s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-840583
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-840583: (1.634269211s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-840583 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-840583 status --format={{.Host}}: exit status 7 (126.445208ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
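For context on that result: minikube's status command reports component state through a small bitmask exit code, and 7 is the value a fully stopped profile returns (host, cluster, and kubelet all down), which is why the harness notes "(may be ok)". A minimal shell sketch of the same probe, reusing this run's profile name:

    out/minikube-linux-arm64 -p kubernetes-upgrade-840583 status --format={{.Host}} || echo "status exit code: $?"   # prints 7 for a stopped profile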
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.410456956s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-840583 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (93.742011ms)

-- stdout --
	* [kubernetes-upgrade-840583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-840583
	    minikube start -p kubernetes-upgrade-840583 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8405832 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-840583 --kubernetes-version=v1.34.1
	    
** /stderr **
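The refusal above is by design: state written by a newer control plane (etcd contents, API object versions) is in general not readable by older components, so minikube only moves a cluster's Kubernetes version forward. The safe route back to v1.28.0 is the delete-and-recreate path from suggestion 1:

    minikube delete -p kubernetes-upgrade-840583
    minikube start -p kubernetes-upgrade-840583 --kubernetes-version=v1.28.0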
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-840583 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.059654637s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-840583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-840583
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-840583: (2.023723871s)
--- PASS: TestKubernetesUpgrade (356.42s)

TestMissingContainerUpgrade (115.82s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2548385151 start -p missing-upgrade-192196 --memory=3072 --driver=docker  --container-runtime=crio
E1002 21:34:25.746645  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2548385151 start -p missing-upgrade-192196 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.905202451s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-192196
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-192196
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-192196 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-192196 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.132477927s)
helpers_test.go:175: Cleaning up "missing-upgrade-192196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-192196
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-192196: (2.16666853s)
--- PASS: TestMissingContainerUpgrade (115.82s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.747409ms)

-- stdout --
	* [NoKubernetes-222907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
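As the MK_USAGE error says, --no-kubernetes and --kubernetes-version are mutually exclusive, since the former asks for a bare container-runtime machine with no cluster to version. Dropping the version flag yields a valid invocation (the suite does exactly this in StartWithStopK8s below):

    out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --driver=docker --container-runtime=crio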
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (47.57s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-222907 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-222907 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.035710238s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-222907 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.57s)

TestNoKubernetes/serial/StartWithStopK8s (9.42s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.824236035s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-222907 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-222907 status -o json: exit status 2 (478.595603ms)

-- stdout --
	{"Name":"NoKubernetes-222907","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-222907
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-222907: (2.116573907s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.42s)

TestNoKubernetes/serial/Start (9.37s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-222907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.367805232s)
--- PASS: TestNoKubernetes/serial/Start (9.37s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-222907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-222907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.683658ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
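The exit codes here line up as expected: systemctl is-active exits 3 when the queried unit is inactive, and minikube ssh surfaces the remote failure as its own exit status 1, so this non-zero result is precisely what "kubelet not running" looks like. Checked by hand, roughly:

    out/minikube-linux-arm64 ssh -p NoKubernetes-222907 "sudo systemctl is-active kubelet"   # prints "inactive" and exits non-zero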
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (0.66s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.66s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-222907
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-222907: (1.20583796s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.9s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-222907 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-222907 --driver=docker  --container-runtime=crio: (6.901685596s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-222907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-222907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.906826ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestStoppedBinaryUpgrade/Upgrade (53.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.359768635 start -p stopped-upgrade-678661 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.359768635 start -p stopped-upgrade-678661 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.769465408s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.359768635 -p stopped-upgrade-678661 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.359768635 -p stopped-upgrade-678661 stop: (1.239328142s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-678661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 21:36:34.539598  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-678661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.92852576s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.94s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-678661
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-678661: (1.210385965s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

TestPause/serial/Start (81.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-342805 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-342805 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.866092712s)
--- PASS: TestPause/serial/Start (81.87s)

TestPause/serial/SecondStartNoReconfiguration (18.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-342805 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-342805 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.497956698s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.54s)

TestNetworkPlugins/group/false (3.6s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-644857 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-644857 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (190.283485ms)

-- stdout --
	* [false-644857] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1002 21:41:06.150695 1167081 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:41:06.150925 1167081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:41:06.150959 1167081 out.go:374] Setting ErrFile to fd 2...
	I1002 21:41:06.150979 1167081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:41:06.151262 1167081 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-992084/.minikube/bin
	I1002 21:41:06.151722 1167081 out.go:368] Setting JSON to false
	I1002 21:41:06.152658 1167081 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23004,"bootTime":1759418263,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:41:06.152758 1167081 start.go:140] virtualization:  
	I1002 21:41:06.156323 1167081 out.go:179] * [false-644857] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:41:06.160183 1167081 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:41:06.160344 1167081 notify.go:221] Checking for updates...
	I1002 21:41:06.166073 1167081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:41:06.169077 1167081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-992084/kubeconfig
	I1002 21:41:06.172071 1167081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-992084/.minikube
	I1002 21:41:06.175085 1167081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:41:06.178190 1167081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:41:06.181638 1167081 config.go:182] Loaded profile config "force-systemd-flag-987043": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:41:06.181786 1167081 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:41:06.203986 1167081 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:41:06.204110 1167081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:41:06.272556 1167081 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:41:06.262947994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:41:06.272673 1167081 docker.go:319] overlay module found
	I1002 21:41:06.275717 1167081 out.go:179] * Using the docker driver based on user configuration
	I1002 21:41:06.278636 1167081 start.go:306] selected driver: docker
	I1002 21:41:06.278656 1167081 start.go:936] validating driver "docker" against <nil>
	I1002 21:41:06.278676 1167081 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:41:06.282264 1167081 out.go:203] 
	W1002 21:41:06.285191 1167081 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 21:41:06.288080 1167081 out.go:203] 
** /stderr **
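The rejection is deliberate: CRI-O ships no built-in pod networking, so minikube requires some CNI whenever --container-runtime=crio is selected, and --cni=false fails validation before any cluster is created. A hypothetical variant that would pass this check names an explicit plugin instead, for example:

    out/minikube-linux-arm64 start -p false-644857 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio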
net_test.go:88: 
----------------------- debugLogs start: false-644857 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-644857

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-644857

>>> host: /etc/nsswitch.conf:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/hosts:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/resolv.conf:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-644857

>>> host: crictl pods:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: crictl containers:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> k8s: describe netcat deployment:
error: context "false-644857" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-644857" does not exist

>>> k8s: netcat logs:
error: context "false-644857" does not exist

>>> k8s: describe coredns deployment:
error: context "false-644857" does not exist

>>> k8s: describe coredns pods:
error: context "false-644857" does not exist

>>> k8s: coredns logs:
error: context "false-644857" does not exist

>>> k8s: describe api server pod(s):
error: context "false-644857" does not exist

>>> k8s: api server logs:
error: context "false-644857" does not exist

>>> host: /etc/cni:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: ip a s:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: ip r s:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: iptables-save:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: iptables table nat:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> k8s: describe kube-proxy daemon set:
error: context "false-644857" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-644857" does not exist

>>> k8s: kube-proxy logs:
error: context "false-644857" does not exist

>>> host: kubelet daemon status:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: kubelet daemon config:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> k8s: kubelet logs:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
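That all-null kubeconfig dump is consistent with every probe above: the start command exited during flag validation, so no cluster, context, or user entry was ever written, which is why each kubectl call reports a missing context. A quick hypothetical confirmation from the same shell:

    kubectl config get-contexts   # prints only the column header when contexts are null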

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-644857

>>> host: docker daemon status:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: docker daemon config:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/docker/daemon.json:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: docker system info:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: cri-docker daemon status:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: cri-docker daemon config:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: cri-dockerd version:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: containerd daemon status:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: containerd daemon config:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/containerd/config.toml:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: containerd config dump:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: crio daemon status:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: crio daemon config:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: /etc/crio:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

>>> host: crio config:
* Profile "false-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644857"

----------------------- debugLogs end: false-644857 [took: 3.254620003s] --------------------------------
helpers_test.go:175: Cleaning up "false-644857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-644857
--- PASS: TestNetworkPlugins/group/false (3.60s)

TestStartStop/group/old-k8s-version/serial/FirstStart (64.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m4.832836853s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-714101 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cd2e4885-9738-419b-a43f-b2503a5228c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 21:51:34.539685  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [cd2e4885-9738-419b-a43f-b2503a5228c3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005001712s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-714101 exec busybox -- /bin/sh -c "ulimit -n"
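A note on that last step: exec'ing ulimit -n inside the busybox pod is presumably a lightweight end-to-end check that kubectl exec works against the freshly deployed app and that the container receives a usable open-file-descriptor limit under CRI-O; run by hand it is the same invocation:

    kubectl --context old-k8s-version-714101 exec busybox -- /bin/sh -c "ulimit -n"   # prints a numeric fd limit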
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.57s)

TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-714101 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-714101 --alsologtostderr -v=3: (11.964853262s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.96s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101: exit status 7 (103.824705ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
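The --images flag used above overrides an addon's default container image (general form --images=Component=registry/image:tag), letting the suite pin the dashboard's metrics scraper to a known image while the cluster is still stopped; the addon then comes up with that image on the next start:

    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-714101 --images=MetricsScraper=registry.k8s.io/echoserver:1.4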
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (56.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-714101 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.709862866s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714101 -n old-k8s-version-714101
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.34s)

TestStartStop/group/no-preload/serial/FirstStart (70.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.604258675s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.60s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-m6s5z" [f3377233-589f-43c3-8135-33c09c2b7651] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004099336s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-m6s5z" [f3377233-589f-43c3-8135-33c09c2b7651] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003565568s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-714101 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-714101 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-661954 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2f75d586-1180-436a-8778-b22230b1b890] Pending
helpers_test.go:352: "busybox" [2f75d586-1180-436a-8778-b22230b1b890] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2f75d586-1180-436a-8778-b22230b1b890] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003533335s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-661954 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/FirstStart (86.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.128719405s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.13s)

TestStartStop/group/no-preload/serial/Stop (12.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-661954 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-661954 --alsologtostderr -v=3: (12.075753646s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954: exit status 7 (98.529859ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-661954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
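
The "exit status 7 (may be ok)" note above is deliberate: minikube status reports a stopped host through its exit code rather than failing outright. A minimal sketch of scripting around that behavior, assuming the same binary and profile name:

# Capture the exit code instead of aborting; 7 with "Stopped" on stdout is
# the expected state right after `minikube stop`.
out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 || rc=$?
if [ "${rc:-0}" -eq 7 ]; then
  echo "host stopped; the dashboard addon will come up on the next start"
fi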

TestStartStop/group/no-preload/serial/SecondStart (60.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 21:54:25.746882  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/functional-850296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-661954 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (59.822543435s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-661954 -n no-preload-661954
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mmbrz" [5828d24d-1b7f-4b37-8eda-0cb1ec554c80] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004109598s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mmbrz" [5828d24d-1b7f-4b37-8eda-0cb1ec554c80] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003244314s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-661954 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-132977 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9f6d78c1-117c-4139-bd07-281c745fef52] Pending
helpers_test.go:352: "busybox" [9f6d78c1-117c-4139-bd07-281c745fef52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9f6d78c1-117c-4139-bd07-281c745fef52] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004734821s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-132977 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-661954 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
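
The image check above consumes `image list --format=json`. A hedged one-liner for inspecting the same data by hand, assuming jq is installed and that the JSON is an array of objects carrying a repoTags field (the shape current minikube emits):

# Print every image tag known to the profile's container runtime.
out/minikube-linux-arm64 -p no-preload-661954 image list --format=json | jq -r '.[].repoTags[]'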

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-132977 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-132977 --alsologtostderr -v=3: (12.114396463s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m27.82897408s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.83s)
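
This profile moves the API server off the default 8443 via --apiserver-port=8444. A quick hedged spot check that the kubeconfig entry picked up the custom port:

# Expect a server URL ending in :8444 for this cluster entry.
kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-842185")].cluster.server}'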

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977: exit status 7 (79.867363ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-132977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (63.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-132977 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.258934845s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-132977 -n embed-certs-132977
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (63.67s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pncmh" [ca1b7c99-a467-43a7-91f9-4bd76f49b14a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003970086s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pncmh" [ca1b7c99-a467-43a7-91f9-4bd76f49b14a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002794062s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-132977 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-132977 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [df2c8518-e488-49e5-ad02-f5d32c72a262] Pending
helpers_test.go:352: "busybox" [df2c8518-e488-49e5-ad02-f5d32c72a262] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [df2c8518-e488-49e5-ad02-f5d32c72a262] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003299889s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

TestStartStop/group/newest-cni/serial/FirstStart (43.83s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 21:56:32.997772  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.005421  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.017146  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.038781  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.080490  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.161868  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.323220  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:33.644852  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.83163264s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.83s)
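
The --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag overrides the pod CIDR kubeadm allocates to nodes. A hedged spot check that the override took effect on the single node:

# Expect a subnet carved from 10.42.0.0/16.
kubectl --context newest-cni-161621 get nodes -o jsonpath='{.items[0].spec.podCIDR}'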

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-842185 --alsologtostderr -v=3
E1002 21:56:38.129411  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:56:43.250761  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-842185 --alsologtostderr -v=3: (12.286617352s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185: exit status 7 (88.016608ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-842185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 21:56:53.492866  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:57:13.974424  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-842185 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m4.649019055s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842185 -n default-k8s-diff-port-842185
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.05s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-161621 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-161621 --alsologtostderr -v=3: (1.40720687s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621: exit status 7 (115.575983ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-161621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (16.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-161621 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.237093265s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-161621 -n newest-cni-161621
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-161621 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/Start (85.44s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1002 21:57:54.936632  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.438798892s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.44s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qfnfg" [0b5b66a2-e15f-4d6a-a076-5eb5b16d10fe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004468946s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qfnfg" [0b5b66a2-e15f-4d6a-a076-5eb5b16d10fe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004123209s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-842185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842185 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestNetworkPlugins/group/kindnet/Start (78.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1002 21:58:20.969638  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:31.212203  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:58:51.694455  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.857416153s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.86s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-644857 "pgrep -a kubelet"
I1002 21:59:10.417587  993954 config.go:182] Loaded profile config "auto-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
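
The KubeletFlags checks rely on `pgrep -a`, which prints each matching PID together with its full command line, exposing kubelet's flags for assertion. Run by hand inside the node:

# Shows kubelet's PID plus its complete argument list (runtime endpoint,
# cgroup driver, and so on).
out/minikube-linux-arm64 ssh -p auto-644857 "pgrep -a kubelet"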

TestNetworkPlugins/group/auto/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-644857 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k9x8j" [7facfb71-fe33-49d9-b8b8-8498f773945f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k9x8j" [7facfb71-fe33-49d9-b8b8-8498f773945f] Running
E1002 21:59:16.858485  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003797369s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
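
The DNS/Localhost/HairPin trio above shares one probe: `nc -z` connects without sending data, `-w 5` caps the timeout at five seconds, and `-i 5` spaces successive probes. HairPin is the interesting case: the pod must reach itself through its own service name, which some bridge setups break. Re-runnable by hand while the netcat deployment is up:

# Hairpin check: the pod dials its own service ("netcat") on port 8080.
kubectl --context auto-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"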

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4vv8f" [4ea9f338-d014-4d87-83f3-e6b8e23c43ff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00344736s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/Start (67.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.779135277s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.78s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-644857 "pgrep -a kubelet"
I1002 21:59:45.349088  993954 config.go:182] Loaded profile config "kindnet-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-644857 replace --force -f testdata/netcat-deployment.yaml
I1002 21:59:45.735563  993954 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fbm6v" [6b1d7bec-c473-44db-bce4-4843a4ea2e43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fbm6v" [6b1d7bec-c473-44db-bce4-4843a4ea2e43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003849731s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)
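
`kubectl replace --force` deletes and recreates the netcat deployment, so each plugin's readiness wait starts from a fresh Pending pod. A hedged equivalent of the wait the harness performs, using kubectl's own primitive:

# Block until the recreated deployment converges, with the same 15m budget.
kubectl --context kindnet-644857 rollout status deployment/netcat --timeout=15m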

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (64.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.25803382s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.26s)
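
Unlike the named plugins, --cni here points at a manifest file (testdata/kube-flannel.yaml), which minikube applies instead of a built-in CNI. A hedged check that the custom DaemonSet landed, assuming the test manifest mirrors upstream kube-flannel's namespace:

# Upstream kube-flannel installs a DaemonSet into the kube-flannel namespace.
kubectl --context custom-flannel-644857 -n kube-flannel get daemonset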

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-p2zhf" [94c6333a-e090-446b-996e-a8349f09edf7] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00430813s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-644857 "pgrep -a kubelet"
E1002 22:00:54.577968  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1002 22:00:54.773851  993954 config.go:182] Loaded profile config "calico-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-644857 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-22vhz" [ca2631bc-b283-48a4-b355-251d590d0e93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-22vhz" [ca2631bc-b283-48a4-b355-251d590d0e93] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003720073s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.37s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-644857 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-644857 replace --force -f testdata/netcat-deployment.yaml
E1002 22:01:27.431627  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jxnl5" [882d23ca-0b1e-4f3c-962d-d8a2404a50d9] Pending
helpers_test.go:352: "netcat-cd4db9dbf-jxnl5" [882d23ca-0b1e-4f3c-962d-d8a2404a50d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:01:29.992962  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jxnl5" [882d23ca-0b1e-4f3c-962d-d8a2404a50d9] Running
E1002 22:01:32.997882  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/old-k8s-version-714101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:34.539649  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/addons-693704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:01:35.114681  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004253781s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

TestNetworkPlugins/group/enable-default-cni/Start (81.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.045163146s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.05s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (56.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1002 22:02:05.838225  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:02:46.799697  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/default-k8s-diff-port-842185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.689516284s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.69s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-644857 "pgrep -a kubelet"
I1002 22:02:51.634613  993954 config.go:182] Loaded profile config "enable-default-cni-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-644857 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nqzxr" [3feb6f33-1727-47c2-94cc-6e5f61b6a33e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nqzxr" [3feb6f33-1727-47c2-94cc-6e5f61b6a33e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003295796s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-5fjm5" [d1774055-0592-4b33-ad93-77ac4c056ecb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00344028s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
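
The ControllerPod checks poll until a labelled pod reports healthy. An equivalent hand-run wait, with the namespace and label taken from the log above:

# Block until the flannel DaemonSet pod is Ready, up to the same 10m budget.
kubectl --context flannel-644857 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m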

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-644857 "pgrep -a kubelet"
I1002 22:03:08.429291  993954 config.go:182] Loaded profile config "flannel-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-644857 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pxnzx" [dac16b5b-2b3a-407e-824a-ef77b7d1d7f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:03:10.710758  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pxnzx" [dac16b5b-2b3a-407e-824a-ef77b7d1d7f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004202058s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)
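
Localhost and HairPin drive the same nc probe from inside the netcat pod: the first connects to the pod's own localhost:8080, the second reaches the pod back through the netcat Service name, which only succeeds if the CNI handles hairpin traffic. Reproduced by hand (exit status 0 means the TCP connect succeeded):

# Pod -> its own localhost listener
kubectl --context flannel-644857 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# Pod -> itself via the "netcat" Service (hairpin NAT through the CNI)
kubectl --context flannel-644857 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"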

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.06s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1002 22:03:38.419965  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/no-preload-661954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-644857 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.056046902s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.06s)
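
With --cni=bridge, minikube lays down a static bridge CNI config on the node rather than deploying a plugin DaemonSet. After a start like the one above, the result can be inspected over SSH; a sketch, noting that the exact file name under /etc/cni/net.d varies between minikube versions:

# Start the profile with the bridge CNI on crio (same flags as the log above)
out/minikube-linux-arm64 start -p bridge-644857 --memory=3072 \
  --cni=bridge --driver=docker --container-runtime=crio

# List and print whatever CNI config minikube generated
out/minikube-linux-arm64 ssh -p bridge-644857 \
  "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*"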

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-644857 "pgrep -a kubelet"
I1002 22:04:40.780260  993954 config.go:182] Loaded profile config "bridge-644857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-644857 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pkwdc" [99f39d7b-96f7-4f20-b15d-4f16d9396c75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:04:41.443495  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:44.005135  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pkwdc" [99f39d7b-96f7-4f20-b15d-4f16d9396c75] Running
E1002 22:04:49.126672  993954 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-992084/.minikube/profiles/kindnet-644857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003311049s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-644857 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-644857 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (31/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-496636 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-496636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-496636
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-013352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-013352
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.43s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-644857 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-644857

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-644857

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/hosts:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/resolv.conf:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-644857

>>> host: crictl pods:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: crictl containers:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> k8s: describe netcat deployment:
error: context "kubenet-644857" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-644857" does not exist

>>> k8s: netcat logs:
error: context "kubenet-644857" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-644857" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-644857" does not exist

>>> k8s: coredns logs:
error: context "kubenet-644857" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-644857" does not exist

>>> k8s: api server logs:
error: context "kubenet-644857" does not exist

>>> host: /etc/cni:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: ip a s:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: ip r s:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: iptables-save:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: iptables table nat:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-644857" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-644857" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-644857" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: kubelet daemon config:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> k8s: kubelet logs:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-644857

>>> host: docker daemon status:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: docker daemon config:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: docker system info:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: cri-docker daemon status:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: cri-docker daemon config:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: cri-dockerd version:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: containerd daemon status:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: containerd daemon config:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: containerd config dump:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: crio daemon status:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: crio daemon config:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: /etc/crio:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"

>>> host: crio config:
* Profile "kubenet-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644857"
----------------------- debugLogs end: kubenet-644857 [took: 3.280898062s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-644857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-644857
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)

TestNetworkPlugins/group/cilium (3.78s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-644857 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-644857

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-644857

>>> host: /etc/nsswitch.conf:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /etc/hosts:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /etc/resolv.conf:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-644857

>>> host: crictl pods:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: crictl containers:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> k8s: describe netcat deployment:
error: context "cilium-644857" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-644857" does not exist

>>> k8s: netcat logs:
error: context "cilium-644857" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-644857" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-644857" does not exist

>>> k8s: coredns logs:
error: context "cilium-644857" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-644857" does not exist

>>> k8s: api server logs:
error: context "cilium-644857" does not exist

>>> host: /etc/cni:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: ip a s:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: ip r s:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: iptables-save:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: iptables table nat:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-644857

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-644857

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-644857" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-644857" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-644857

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-644857

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-644857" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-644857" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-644857" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-644857" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-644857" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: kubelet daemon config:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> k8s: kubelet logs:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-644857

>>> host: docker daemon status:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: docker daemon config:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: docker system info:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: cri-docker daemon status:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: cri-docker daemon config:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: cri-dockerd version:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: containerd daemon status:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

>>> host: containerd daemon config:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-644857" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644857"

                                                
                                                
----------------------- debugLogs end: cilium-644857 [took: 3.621201909s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-644857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-644857
--- SKIP: TestNetworkPlugins/group/cilium (3.78s)
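Every probe in the debugLogs block above fails for the same reason: the "cilium-644857" profile does not exist at collection time, so there is no kubeconfig context and no host to query. A minimal sketch of checking and reproducing that state by hand, using only commands the log itself quotes (the profile name is just this run's example):

	# list known profiles; a deleted profile will not appear
	out/minikube-linux-arm64 profile list
	# recreate the profile if you actually want these logs collected
	out/minikube-linux-arm64 start -p cilium-644857
	# remove it again, as the test's cleanup step does
	out/minikube-linux-arm64 delete -p cilium-644857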